2025-07-04 17:22:09.461502 | Job console starting
2025-07-04 17:22:09.494451 | Updating git repos
2025-07-04 17:22:09.590706 | Cloning repos into workspace
2025-07-04 17:22:09.933926 | Restoring repo states
2025-07-04 17:22:09.965148 | Merging changes
2025-07-04 17:22:09.965171 | Checking out repos
2025-07-04 17:22:10.293051 | Preparing playbooks
2025-07-04 17:22:11.353149 | Running Ansible setup
2025-07-04 17:22:16.325145 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-07-04 17:22:17.119275 |
2025-07-04 17:22:17.119448 | PLAY [Base pre]
2025-07-04 17:22:17.136495 |
2025-07-04 17:22:17.136649 | TASK [Setup log path fact]
2025-07-04 17:22:17.179743 | orchestrator | ok
2025-07-04 17:22:17.226137 |
2025-07-04 17:22:17.226316 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-07-04 17:22:17.258778 | orchestrator | ok
2025-07-04 17:22:17.271721 |
2025-07-04 17:22:17.271863 | TASK [emit-job-header : Print job information]
2025-07-04 17:22:17.315559 | # Job Information
2025-07-04 17:22:17.315871 | Ansible Version: 2.16.14
2025-07-04 17:22:17.315937 | Job: testbed-deploy-stable-in-a-nutshell-ubuntu-24.04
2025-07-04 17:22:17.315997 | Pipeline: post
2025-07-04 17:22:17.316040 | Executor: 521e9411259a
2025-07-04 17:22:17.316077 | Triggered by: https://github.com/osism/testbed/commit/44990e8f7b9bedef8efaa8f19f340f05991fb452
2025-07-04 17:22:17.316116 | Event ID: 65d686d8-58fb-11f0-95d5-45b4b71ec47f
2025-07-04 17:22:17.325375 |
2025-07-04 17:22:17.325538 | LOOP [emit-job-header : Print node information]
2025-07-04 17:22:17.500716 | orchestrator | ok:
2025-07-04 17:22:17.501021 | orchestrator | # Node Information
2025-07-04 17:22:17.501064 | orchestrator | Inventory Hostname: orchestrator
2025-07-04 17:22:17.501090 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-07-04 17:22:17.501113 | orchestrator | Username: zuul-testbed05
2025-07-04 17:22:17.501134 | orchestrator | Distro: Debian 12.11
2025-07-04 17:22:17.501162 | orchestrator | Provider: static-testbed
2025-07-04 17:22:17.501184 | orchestrator | Region:
2025-07-04 17:22:17.501206 | orchestrator | Label: testbed-orchestrator
2025-07-04 17:22:17.501227 | orchestrator | Product Name: OpenStack Nova
2025-07-04 17:22:17.501246 | orchestrator | Interface IP: 81.163.193.140
2025-07-04 17:22:17.521158 |
2025-07-04 17:22:17.521313 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-07-04 17:22:18.082444 | orchestrator -> localhost | changed
2025-07-04 17:22:18.096836 |
2025-07-04 17:22:18.097470 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-07-04 17:22:19.376419 | orchestrator -> localhost | changed
2025-07-04 17:22:19.416552 |
2025-07-04 17:22:19.416694 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-07-04 17:22:19.758107 | orchestrator -> localhost | ok
2025-07-04 17:22:19.767456 |
2025-07-04 17:22:19.767596 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-07-04 17:22:19.797229 | orchestrator | ok
2025-07-04 17:22:19.814753 | orchestrator | included: /var/lib/zuul/builds/7885af844d2e46e9b44bce6c93c3bd94/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-07-04 17:22:19.823362 |
2025-07-04 17:22:19.823467 | TASK [add-build-sshkey : Create Temp SSH key]
2025-07-04 17:22:21.218634 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-07-04 17:22:21.219050 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/7885af844d2e46e9b44bce6c93c3bd94/work/7885af844d2e46e9b44bce6c93c3bd94_id_rsa
2025-07-04 17:22:21.219099 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/7885af844d2e46e9b44bce6c93c3bd94/work/7885af844d2e46e9b44bce6c93c3bd94_id_rsa.pub
2025-07-04 17:22:21.219127 | orchestrator -> localhost | The key fingerprint is:
2025-07-04 17:22:21.219156 | orchestrator -> localhost | SHA256:ApjryDO5G8KYalhv3kcivJ5ldU8iKaQtH9Vtt2AEKbE zuul-build-sshkey
2025-07-04 17:22:21.219179 | orchestrator -> localhost | The key's randomart image is:
2025-07-04 17:22:21.219214 | orchestrator -> localhost | +---[RSA 3072]----+
2025-07-04 17:22:21.219236 | orchestrator -> localhost | | ...o. |
2025-07-04 17:22:21.219258 | orchestrator -> localhost | | o .o.o |
2025-07-04 17:22:21.219279 | orchestrator -> localhost | | o . . E.. = . |
2025-07-04 17:22:21.219299 | orchestrator -> localhost | | . = . . o o . |
2025-07-04 17:22:21.219319 | orchestrator -> localhost | | ..o = S o . . |
2025-07-04 17:22:21.219346 | orchestrator -> localhost | |++o oo.=.o + |
2025-07-04 17:22:21.219367 | orchestrator -> localhost | |*O.. o+o . |
2025-07-04 17:22:21.219387 | orchestrator -> localhost | |+.= += . |
2025-07-04 17:22:21.219409 | orchestrator -> localhost | |oo.++ .. |
2025-07-04 17:22:21.219430 | orchestrator -> localhost | +----[SHA256]-----+
2025-07-04 17:22:21.219491 | orchestrator -> localhost | ok: Runtime: 0:00:00.881062
2025-07-04 17:22:21.227608 |
2025-07-04 17:22:21.227724 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-07-04 17:22:21.257421 | orchestrator | ok
2025-07-04 17:22:21.267564 | orchestrator | included: /var/lib/zuul/builds/7885af844d2e46e9b44bce6c93c3bd94/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-07-04 17:22:21.276880 |
2025-07-04 17:22:21.276986 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-07-04 17:22:21.300532 | orchestrator | skipping: Conditional result was False
2025-07-04 17:22:21.308509 |
2025-07-04 17:22:21.308617 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-07-04 17:22:22.515866 | orchestrator | changed
2025-07-04 17:22:22.526275 |
2025-07-04 17:22:22.526403 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-07-04 17:22:22.864331 | orchestrator | ok
2025-07-04 17:22:22.872133 |
2025-07-04 17:22:22.872340 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-07-04 17:22:23.284645 | orchestrator | ok
2025-07-04 17:22:23.292128 |
2025-07-04 17:22:23.292242 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-07-04 17:22:23.677506 | orchestrator | ok
2025-07-04 17:22:23.683930 |
2025-07-04 17:22:23.684038 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-07-04 17:22:23.707825 | orchestrator | skipping: Conditional result was False
2025-07-04 17:22:23.717217 |
2025-07-04 17:22:23.717344 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-07-04 17:22:24.190779 | orchestrator -> localhost | changed
2025-07-04 17:22:24.205420 |
2025-07-04 17:22:24.205544 | TASK [add-build-sshkey : Add back temp key]
2025-07-04 17:22:24.544200 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/7885af844d2e46e9b44bce6c93c3bd94/work/7885af844d2e46e9b44bce6c93c3bd94_id_rsa (zuul-build-sshkey)
2025-07-04 17:22:24.544560 | orchestrator -> localhost | ok: Runtime: 0:00:00.015431
2025-07-04 17:22:24.556115 |
2025-07-04 17:22:24.556238 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-07-04 17:22:24.986262 | orchestrator | ok
2025-07-04 17:22:24.994254 |
2025-07-04 17:22:24.994367 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-07-04 17:22:25.018800 | orchestrator | skipping: Conditional result was False
2025-07-04 17:22:25.092513 |
2025-07-04 17:22:25.092763 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-07-04 17:22:25.532348 | orchestrator | ok
2025-07-04 17:22:25.547047 |
2025-07-04 17:22:25.547175 | TASK [validate-host : Define zuul_info_dir fact]
2025-07-04 17:22:25.584521 | orchestrator | ok
2025-07-04 17:22:25.594707 |
2025-07-04 17:22:25.594888 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-07-04 17:22:25.903180 | orchestrator -> localhost | ok
2025-07-04 17:22:25.914277 |
2025-07-04 17:22:25.914409 | TASK [validate-host : Collect information about the host]
2025-07-04 17:22:27.208989 | orchestrator | ok
2025-07-04 17:22:27.240981 |
2025-07-04 17:22:27.241614 | TASK [validate-host : Sanitize hostname]
2025-07-04 17:22:27.359725 | orchestrator | ok
2025-07-04 17:22:27.374331 |
2025-07-04 17:22:27.374597 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-07-04 17:22:28.162699 | orchestrator -> localhost | changed
2025-07-04 17:22:28.169772 |
2025-07-04 17:22:28.169925 | TASK [validate-host : Collect information about zuul worker]
2025-07-04 17:22:28.626928 | orchestrator | ok
2025-07-04 17:22:28.633145 |
2025-07-04 17:22:28.633282 | TASK [validate-host : Write out all zuul information for each host]
2025-07-04 17:22:29.193082 | orchestrator -> localhost | changed
2025-07-04 17:22:29.208002 |
2025-07-04 17:22:29.208126 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-07-04 17:22:29.500629 | orchestrator | ok
2025-07-04 17:22:29.507530 |
2025-07-04 17:22:29.507645 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-07-04 17:23:41.386988 | orchestrator | changed:
2025-07-04 17:23:41.387412 | orchestrator | .d..t...... src/
2025-07-04 17:23:41.387476 | orchestrator | .d..t...... src/github.com/
2025-07-04 17:23:41.387626 | orchestrator | .d..t...... src/github.com/osism/
2025-07-04 17:23:41.387666 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-07-04 17:23:41.387696 | orchestrator | RedHat.yml
2025-07-04 17:23:41.408254 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-07-04 17:23:41.408273 | orchestrator | RedHat.yml
2025-07-04 17:23:41.408425 | orchestrator | = 1.53.0"...
2025-07-04 17:23:59.269330 | orchestrator | 17:23:59.269 STDOUT terraform: - Finding hashicorp/local versions matching ">= 2.2.0"...
2025-07-04 17:23:59.300052 | orchestrator | 17:23:59.299 STDOUT terraform: - Finding latest version of hashicorp/null...
2025-07-04 17:24:00.267404 | orchestrator | 17:24:00.267 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.2.0...
2025-07-04 17:24:01.439542 | orchestrator | 17:24:01.439 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.2.0 (signed, key ID 4F80527A391BEFD2)
2025-07-04 17:24:02.311507 | orchestrator | 17:24:02.311 STDOUT terraform: - Installing hashicorp/local v2.5.3...
2025-07-04 17:24:04.015528 | orchestrator | 17:24:04.015 STDOUT terraform: - Installed hashicorp/local v2.5.3 (signed, key ID 0C0AF313E5FD9F80)
2025-07-04 17:24:05.084844 | orchestrator | 17:24:05.084 STDOUT terraform: - Installing hashicorp/null v3.2.4...
2025-07-04 17:24:06.014793 | orchestrator | 17:24:06.014 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2025-07-04 17:24:06.015039 | orchestrator | 17:24:06.014 STDOUT terraform: Providers are signed by their developers.
2025-07-04 17:24:06.015051 | orchestrator | 17:24:06.014 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-07-04 17:24:06.015056 | orchestrator | 17:24:06.015 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-07-04 17:24:06.015329 | orchestrator | 17:24:06.015 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-07-04 17:24:06.015341 | orchestrator | 17:24:06.015 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-07-04 17:24:06.015348 | orchestrator | 17:24:06.015 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-07-04 17:24:06.015352 | orchestrator | 17:24:06.015 STDOUT terraform: you run "tofu init" in the future.
2025-07-04 17:24:06.015936 | orchestrator | 17:24:06.015 STDOUT terraform: OpenTofu has been successfully initialized!
2025-07-04 17:24:06.016267 | orchestrator | 17:24:06.015 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-07-04 17:24:06.016277 | orchestrator | 17:24:06.016 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-07-04 17:24:06.016282 | orchestrator | 17:24:06.016 STDOUT terraform: should now work.
2025-07-04 17:24:06.016286 | orchestrator | 17:24:06.016 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-07-04 17:24:06.016290 | orchestrator | 17:24:06.016 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-07-04 17:24:06.016295 | orchestrator | 17:24:06.016 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-07-04 17:24:06.138491 | orchestrator | 17:24:06.138 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed05/terraform` instead.
2025-07-04 17:24:06.138724 | orchestrator | 17:24:06.138 WARN  The `workspace` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- workspace` instead.
2025-07-04 17:24:06.369072 | orchestrator | 17:24:06.368 STDOUT terraform: Created and switched to workspace "ci"!
2025-07-04 17:24:06.369191 | orchestrator | 17:24:06.368 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-07-04 17:24:06.369221 | orchestrator | 17:24:06.368 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-07-04 17:24:06.369234 | orchestrator | 17:24:06.369 STDOUT terraform: for this configuration.
2025-07-04 17:24:06.555270 | orchestrator | 17:24:06.555 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed05/terraform` instead.
2025-07-04 17:24:06.555379 | orchestrator | 17:24:06.555 WARN  The `fmt` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- fmt` instead.
2025-07-04 17:24:06.669759 | orchestrator | 17:24:06.669 STDOUT terraform: ci.auto.tfvars
2025-07-04 17:24:06.707991 | orchestrator | 17:24:06.707 STDOUT terraform: default_custom.tf
2025-07-04 17:24:07.343858 | orchestrator | 17:24:07.343 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed05/terraform` instead.
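The "tofu init" output above (OpenTofu invoked through Terragrunt) selected terraform-provider-openstack/openstack v3.2.0, hashicorp/local v2.5.3, and hashicorp/null v3.2.4. As a rough orientation, a required_providers block producing that selection would look like the following sketch. This is not the testbed repository's actual configuration: the log only shows the ">= 2.2.0" constraint for hashicorp/local, the constraint line for the OpenStack provider is truncated, and no constraint is shown for hashicorp/null.

```hcl
# Hedged sketch of a provider-requirements block consistent with the init log.
# Only the hashicorp/local constraint is taken from the log; the rest is assumed.
terraform {
  required_providers {
    openstack = {
      source = "terraform-provider-openstack/openstack"
      # constraint truncated in the log; v3.2.0 was installed
    }
    local = {
      source  = "hashicorp/local"
      version = ">= 2.2.0"
    }
    null = {
      source = "hashicorp/null"
      # no constraint shown; init resolved the latest version, v3.2.4
    }
  }
}
```

The .terraform.lock.hcl file mentioned in the output pins these resolved versions so later runs of "tofu init" make the same selections.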
2025-07-04 17:24:09.058392 | orchestrator | 17:24:09.058 STDOUT terraform: data.openstack_networking_network_v2.public: Reading...
2025-07-04 17:24:09.554467 | orchestrator | 17:24:09.554 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2025-07-04 17:24:09.741951 | orchestrator | 17:24:09.741 STDOUT terraform: OpenTofu used the selected providers to generate the following execution
2025-07-04 17:24:09.742046 | orchestrator | 17:24:09.741 STDOUT terraform: plan. Resource actions are indicated with the following symbols:
2025-07-04 17:24:09.742071 | orchestrator | 17:24:09.742 STDOUT terraform:   + create
2025-07-04 17:24:09.742118 | orchestrator | 17:24:09.742 STDOUT terraform:  <= read (data resources)
2025-07-04 17:24:09.742189 | orchestrator | 17:24:09.742 STDOUT terraform: OpenTofu will perform the following actions:
2025-07-04 17:24:09.742337 | orchestrator | 17:24:09.742 STDOUT terraform:   # data.openstack_images_image_v2.image will be read during apply
2025-07-04 17:24:09.742401 | orchestrator | 17:24:09.742 STDOUT terraform:   # (config refers to values not yet known)
2025-07-04 17:24:09.742465 | orchestrator | 17:24:09.742 STDOUT terraform:  <= data "openstack_images_image_v2" "image" {
2025-07-04 17:24:09.742526 | orchestrator | 17:24:09.742 STDOUT terraform:       + checksum    = (known after apply)
2025-07-04 17:24:09.742589 | orchestrator | 17:24:09.742 STDOUT terraform:       + created_at  = (known after apply)
2025-07-04 17:24:09.742679 | orchestrator | 17:24:09.742 STDOUT terraform:       + file        = (known after apply)
2025-07-04 17:24:09.742743 | orchestrator | 17:24:09.742 STDOUT terraform:       + id          = (known after apply)
2025-07-04 17:24:09.742801 | orchestrator | 17:24:09.742 STDOUT terraform:       + metadata    = (known after apply)
2025-07-04 17:24:09.742866 | orchestrator | 17:24:09.742 STDOUT terraform:       + min_disk_gb = (known after apply)
2025-07-04 17:24:09.742928 | orchestrator | 17:24:09.742 STDOUT terraform:       + min_ram_mb  = (known after apply)
2025-07-04 17:24:09.742969 | orchestrator | 17:24:09.742 STDOUT terraform:       + most_recent = true
2025-07-04 17:24:09.743033 | orchestrator | 17:24:09.742 STDOUT terraform:       + name        = (known after apply)
2025-07-04 17:24:09.743092 | orchestrator | 17:24:09.743 STDOUT terraform:       + protected   = (known after apply)
2025-07-04 17:24:09.743153 | orchestrator | 17:24:09.743 STDOUT terraform:       + region      = (known after apply)
2025-07-04 17:24:09.743218 | orchestrator | 17:24:09.743 STDOUT terraform:       + schema      = (known after apply)
2025-07-04 17:24:09.743278 | orchestrator | 17:24:09.743 STDOUT terraform:       + size_bytes  = (known after apply)
2025-07-04 17:24:09.743339 | orchestrator | 17:24:09.743 STDOUT terraform:       + tags        = (known after apply)
2025-07-04 17:24:09.743401 | orchestrator | 17:24:09.743 STDOUT terraform:       + updated_at  = (known after apply)
2025-07-04 17:24:09.743431 | orchestrator | 17:24:09.743 STDOUT terraform:     }
2025-07-04 17:24:09.743558 | orchestrator | 17:24:09.743 STDOUT terraform:   # data.openstack_images_image_v2.image_node will be read during apply
2025-07-04 17:24:09.743642 | orchestrator | 17:24:09.743 STDOUT terraform:   # (config refers to values not yet known)
2025-07-04 17:24:09.743720 | orchestrator | 17:24:09.743 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" {
2025-07-04 17:24:09.743789 | orchestrator | 17:24:09.743 STDOUT terraform:       + checksum    = (known after apply)
2025-07-04 17:24:09.743849 | orchestrator | 17:24:09.743 STDOUT terraform:       + created_at  = (known after apply)
2025-07-04 17:24:09.743917 | orchestrator | 17:24:09.743 STDOUT terraform:       + file        = (known after apply)
2025-07-04 17:24:09.743985 | orchestrator | 17:24:09.743 STDOUT terraform:       + id          = (known after apply)
2025-07-04 17:24:09.744043 | orchestrator | 17:24:09.743 STDOUT terraform:       + metadata    = (known after apply)
2025-07-04 17:24:09.744104 | orchestrator | 17:24:09.744 STDOUT terraform:       + min_disk_gb = (known after apply)
2025-07-04 17:24:09.744165 | orchestrator | 17:24:09.744 STDOUT terraform:       + min_ram_mb  = (known after apply)
2025-07-04 17:24:09.744221 | orchestrator | 17:24:09.744 STDOUT terraform:       + most_recent = true
2025-07-04 17:24:09.744285 | orchestrator | 17:24:09.744 STDOUT terraform:       + name        = (known after apply)
2025-07-04 17:24:09.744344 | orchestrator | 17:24:09.744 STDOUT terraform:       + protected   = (known after apply)
2025-07-04 17:24:09.744405 | orchestrator | 17:24:09.744 STDOUT terraform:       + region      = (known after apply)
2025-07-04 17:24:09.744466 | orchestrator | 17:24:09.744 STDOUT terraform:       + schema      = (known after apply)
2025-07-04 17:24:09.744546 | orchestrator | 17:24:09.744 STDOUT terraform:       + size_bytes  = (known after apply)
2025-07-04 17:24:09.744650 | orchestrator | 17:24:09.744 STDOUT terraform:       + tags        = (known after apply)
2025-07-04 17:24:09.744715 | orchestrator | 17:24:09.744 STDOUT terraform:       + updated_at  = (known after apply)
2025-07-04 17:24:09.744743 | orchestrator | 17:24:09.744 STDOUT terraform:     }
2025-07-04 17:24:09.744807 | orchestrator | 17:24:09.744 STDOUT terraform:   # local_file.MANAGER_ADDRESS will be created
2025-07-04 17:24:09.744870 | orchestrator | 17:24:09.744 STDOUT terraform:   + resource "local_file" "MANAGER_ADDRESS" {
2025-07-04 17:24:09.744955 | orchestrator | 17:24:09.744 STDOUT terraform:       + content              = (known after apply)
2025-07-04 17:24:09.745035 | orchestrator | 17:24:09.744 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-07-04 17:24:09.745110 | orchestrator | 17:24:09.745 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-07-04 17:24:09.745185 | orchestrator | 17:24:09.745 STDOUT terraform:       + content_md5          = (known after apply)
2025-07-04 17:24:09.745258 | orchestrator | 17:24:09.745 STDOUT terraform:       + content_sha1         = (known after apply)
2025-07-04 17:24:09.745348 | orchestrator | 17:24:09.745 STDOUT terraform:       + content_sha256       = (known after apply)
2025-07-04 17:24:09.745424 | orchestrator | 17:24:09.745 STDOUT terraform:       + content_sha512       = (known after apply)
2025-07-04 17:24:09.745473 | orchestrator | 17:24:09.745 STDOUT terraform:       + directory_permission = "0777"
2025-07-04 17:24:09.745525 | orchestrator | 17:24:09.745 STDOUT terraform:       + file_permission      = "0644"
2025-07-04 17:24:09.745643 | orchestrator | 17:24:09.745 STDOUT terraform:       + filename             = ".MANAGER_ADDRESS.ci"
2025-07-04 17:24:09.745696 | orchestrator | 17:24:09.745 STDOUT terraform:       + id                   = (known after apply)
2025-07-04 17:24:09.745728 | orchestrator | 17:24:09.745 STDOUT terraform:     }
2025-07-04 17:24:09.745787 | orchestrator | 17:24:09.745 STDOUT terraform:   # local_file.id_rsa_pub will be created
2025-07-04 17:24:09.745848 | orchestrator | 17:24:09.745 STDOUT terraform:   + resource "local_file" "id_rsa_pub" {
2025-07-04 17:24:09.745926 | orchestrator | 17:24:09.745 STDOUT terraform:       + content              = (known after apply)
2025-07-04 17:24:09.746001 | orchestrator | 17:24:09.745 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-07-04 17:24:09.746143 | orchestrator | 17:24:09.746 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-07-04 17:24:09.746227 | orchestrator | 17:24:09.746 STDOUT terraform:       + content_md5          = (known after apply)
2025-07-04 17:24:09.746304 | orchestrator | 17:24:09.746 STDOUT terraform:       + content_sha1         = (known after apply)
2025-07-04 17:24:09.746380 | orchestrator | 17:24:09.746 STDOUT terraform:       + content_sha256       = (known after apply)
2025-07-04 17:24:09.746458 | orchestrator | 17:24:09.746 STDOUT terraform:       + content_sha512       = (known after apply)
2025-07-04 17:24:09.746506 | orchestrator | 17:24:09.746 STDOUT terraform:       + directory_permission = "0777"
2025-07-04 17:24:09.746560 | orchestrator | 17:24:09.746 STDOUT terraform:       + file_permission      = "0644"
2025-07-04 17:24:09.746644 | orchestrator | 17:24:09.746 STDOUT terraform:       + filename             = ".id_rsa.ci.pub"
2025-07-04 17:24:09.746722 | orchestrator | 17:24:09.746 STDOUT terraform:       + id                   = (known after apply)
2025-07-04 17:24:09.746750 | orchestrator | 17:24:09.746 STDOUT terraform:     }
2025-07-04 17:24:09.746800 | orchestrator | 17:24:09.746 STDOUT terraform:   # local_file.inventory will be created
2025-07-04 17:24:09.746851 | orchestrator | 17:24:09.746 STDOUT terraform:   + resource "local_file" "inventory" {
2025-07-04 17:24:09.746926 | orchestrator | 17:24:09.746 STDOUT terraform:       + content              = (known after apply)
2025-07-04 17:24:09.747000 | orchestrator | 17:24:09.746 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-07-04 17:24:09.747078 | orchestrator | 17:24:09.746 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-07-04 17:24:09.747155 | orchestrator | 17:24:09.747 STDOUT terraform:       + content_md5          = (known after apply)
2025-07-04 17:24:09.747229 | orchestrator | 17:24:09.747 STDOUT terraform:       + content_sha1         = (known after apply)
2025-07-04 17:24:09.747303 | orchestrator | 17:24:09.747 STDOUT terraform:       + content_sha256       = (known after apply)
2025-07-04 17:24:09.747381 | orchestrator | 17:24:09.747 STDOUT terraform:       + content_sha512       = (known after apply)
2025-07-04 17:24:09.747432 | orchestrator | 17:24:09.747 STDOUT terraform:       + directory_permission = "0777"
2025-07-04 17:24:09.747481 | orchestrator | 17:24:09.747 STDOUT terraform:       + file_permission      = "0644"
2025-07-04 17:24:09.747543 | orchestrator | 17:24:09.747 STDOUT terraform:       + filename             = "inventory.ci"
2025-07-04 17:24:09.747649 | orchestrator | 17:24:09.747 STDOUT terraform:       + id                   = (known after apply)
2025-07-04 17:24:09.747667 | orchestrator | 17:24:09.747 STDOUT terraform:     }
2025-07-04 17:24:09.747859 | orchestrator | 17:24:09.747 STDOUT terraform:   # local_sensitive_file.id_rsa will be created
2025-07-04 17:24:09.747928 | orchestrator | 17:24:09.747 STDOUT terraform:   + resource "local_sensitive_file" "id_rsa" {
2025-07-04 17:24:09.747998 | orchestrator | 17:24:09.747 STDOUT terraform:       + content              = (sensitive value)
2025-07-04 17:24:09.748076 | orchestrator | 17:24:09.747 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-07-04 17:24:09.748149 | orchestrator | 17:24:09.748 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-07-04 17:24:09.748224 | orchestrator | 17:24:09.748 STDOUT terraform:       + content_md5          = (known after apply)
2025-07-04 17:24:09.748297 | orchestrator | 17:24:09.748 STDOUT terraform:       + content_sha1         = (known after apply)
2025-07-04 17:24:09.748376 | orchestrator | 17:24:09.748 STDOUT terraform:       + content_sha256       = (known after apply)
2025-07-04 17:24:09.748449 | orchestrator | 17:24:09.748 STDOUT terraform:       + content_sha512       = (known after apply)
2025-07-04 17:24:09.748499 | orchestrator | 17:24:09.748 STDOUT terraform:       + directory_permission = "0700"
2025-07-04 17:24:09.748550 | orchestrator | 17:24:09.748 STDOUT terraform:       + file_permission      = "0600"
2025-07-04 17:24:09.748656 | orchestrator | 17:24:09.748 STDOUT terraform:       + filename             = ".id_rsa.ci"
2025-07-04 17:24:09.748744 | orchestrator | 17:24:09.748 STDOUT terraform:       + id                   = (known after apply)
2025-07-04 17:24:09.748871 | orchestrator | 17:24:09.748 STDOUT terraform:     }
2025-07-04 17:24:09.748878 | orchestrator | 17:24:09.748 STDOUT terraform:   # null_resource.node_semaphore will be created
2025-07-04 17:24:09.748890 | orchestrator | 17:24:09.748 STDOUT terraform:   + resource "null_resource" "node_semaphore" {
2025-07-04 17:24:09.748916 | orchestrator | 17:24:09.748 STDOUT terraform:       + id = (known after apply)
2025-07-04 17:24:09.748998 | orchestrator | 17:24:09.748 STDOUT terraform:     }
2025-07-04 17:24:09.749107 | orchestrator | 17:24:09.748 STDOUT terraform:   # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2025-07-04 17:24:09.749204 | orchestrator | 17:24:09.749 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2025-07-04 17:24:09.749283 | orchestrator | 17:24:09.749 STDOUT terraform:       + attachment           = (known after apply)
2025-07-04 17:24:09.749330 | orchestrator | 17:24:09.749 STDOUT terraform:       + availability_zone    = "nova"
2025-07-04 17:24:09.749400 | orchestrator | 17:24:09.749 STDOUT terraform:       + id                   = (known after apply)
2025-07-04 17:24:09.750180 | orchestrator | 17:24:09.749 STDOUT terraform:       + image_id             = (known after apply)
2025-07-04 17:24:09.750210 | orchestrator | 17:24:09.749 STDOUT terraform:       + metadata             = (known after apply)
2025-07-04 17:24:09.750215 | orchestrator | 17:24:09.749 STDOUT terraform:       + name                 = "testbed-volume-manager-base"
2025-07-04 17:24:09.750219 | orchestrator | 17:24:09.749 STDOUT terraform:       + region               = (known after apply)
2025-07-04 17:24:09.750223 | orchestrator | 17:24:09.749 STDOUT terraform:       + size                 = 80
2025-07-04 17:24:09.750227 | orchestrator | 17:24:09.749 STDOUT terraform:       + volume_retype_policy = "never"
2025-07-04 17:24:09.750232 | orchestrator | 17:24:09.749 STDOUT terraform:       + volume_type          = "ssd"
2025-07-04 17:24:09.750236 | orchestrator | 17:24:09.749 STDOUT terraform:     }
2025-07-04 17:24:09.750241 | orchestrator | 17:24:09.749 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2025-07-04 17:24:09.750245 | orchestrator | 17:24:09.749 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-07-04 17:24:09.750249 | orchestrator | 17:24:09.749 STDOUT terraform:       + attachment           = (known after apply)
2025-07-04 17:24:09.750253 | orchestrator | 17:24:09.750 STDOUT terraform:       + availability_zone    = "nova"
2025-07-04 17:24:09.750260 | orchestrator | 17:24:09.750 STDOUT terraform:       + id                   = (known after apply)
2025-07-04 17:24:09.750264 | orchestrator | 17:24:09.750 STDOUT terraform:       + image_id             = (known after apply)
2025-07-04 17:24:09.750307 | orchestrator | 17:24:09.750 STDOUT terraform:       + metadata             = (known after apply)
2025-07-04 17:24:09.750399 | orchestrator | 17:24:09.750 STDOUT terraform:       + name                 = "testbed-volume-0-node-base"
2025-07-04 17:24:09.750467 | orchestrator | 17:24:09.750 STDOUT terraform:       + region               = (known after apply)
2025-07-04 17:24:09.750507 | orchestrator | 17:24:09.750 STDOUT terraform:       + size                 = 80
2025-07-04 17:24:09.750555 | orchestrator | 17:24:09.750 STDOUT terraform:       + volume_retype_policy = "never"
2025-07-04 17:24:09.750601 | orchestrator | 17:24:09.750 STDOUT terraform:       + volume_type          = "ssd"
2025-07-04 17:24:09.750653 | orchestrator | 17:24:09.750 STDOUT terraform:     }
2025-07-04 17:24:09.750728 | orchestrator | 17:24:09.750 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2025-07-04 17:24:09.750815 | orchestrator | 17:24:09.750 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-07-04 17:24:09.750905 | orchestrator | 17:24:09.750 STDOUT terraform:       + attachment           = (known after apply)
2025-07-04 17:24:09.750980 | orchestrator | 17:24:09.750 STDOUT terraform:       + availability_zone    = "nova"
2025-07-04 17:24:09.751106 | orchestrator | 17:24:09.750 STDOUT terraform:       + id                   = (known after apply)
2025-07-04 17:24:09.751201 | orchestrator | 17:24:09.751 STDOUT terraform:       + image_id             = (known after apply)
2025-07-04 17:24:09.751272 | orchestrator | 17:24:09.751 STDOUT terraform:       + metadata             = (known after apply)
2025-07-04 17:24:09.751364 | orchestrator | 17:24:09.751 STDOUT terraform:       + name                 = "testbed-volume-1-node-base"
2025-07-04 17:24:09.751436 | orchestrator | 17:24:09.751 STDOUT terraform:       + region               = (known after apply)
2025-07-04 17:24:09.751477 | orchestrator | 17:24:09.751 STDOUT terraform:       + size                 = 80
2025-07-04 17:24:09.751524 | orchestrator | 17:24:09.751 STDOUT terraform:       + volume_retype_policy = "never"
2025-07-04 17:24:09.751574 | orchestrator | 17:24:09.751 STDOUT terraform:       + volume_type          = "ssd"
2025-07-04 17:24:09.751600 | orchestrator | 17:24:09.751 STDOUT terraform:     }
2025-07-04 17:24:09.751747 | orchestrator | 17:24:09.751 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2025-07-04 17:24:09.751842 | orchestrator | 17:24:09.751 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-07-04 17:24:09.751907 | orchestrator | 17:24:09.751 STDOUT terraform:       + attachment           = (known after apply)
2025-07-04 17:24:09.751954 | orchestrator | 17:24:09.751 STDOUT terraform:       + availability_zone    = "nova"
2025-07-04 17:24:09.752027 | orchestrator | 17:24:09.751 STDOUT terraform:       + id                   = (known after apply)
2025-07-04 17:24:09.752103 | orchestrator | 17:24:09.752 STDOUT terraform:       + image_id             = (known after apply)
2025-07-04 17:24:09.752170 | orchestrator | 17:24:09.752 STDOUT terraform:       + metadata             = (known after apply)
2025-07-04 17:24:09.752247 | orchestrator | 17:24:09.752 STDOUT terraform:       + name                 = "testbed-volume-2-node-base"
2025-07-04 17:24:09.752306 | orchestrator | 17:24:09.752 STDOUT terraform:       + region               = (known after apply)
2025-07-04 17:24:09.752345 | orchestrator | 17:24:09.752 STDOUT terraform:       + size                 = 80
2025-07-04 17:24:09.752384 | orchestrator | 17:24:09.752 STDOUT terraform:       + volume_retype_policy = "never"
2025-07-04 17:24:09.752423 | orchestrator | 17:24:09.752 STDOUT terraform:       + volume_type          = "ssd"
2025-07-04 17:24:09.752443 | orchestrator | 17:24:09.752 STDOUT terraform:     }
2025-07-04 17:24:09.752521 | orchestrator | 17:24:09.752 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2025-07-04 17:24:09.752595 | orchestrator | 17:24:09.752 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-07-04 17:24:09.752669 | orchestrator | 17:24:09.752 STDOUT terraform:       + attachment           = (known after apply)
2025-07-04 17:24:09.752708 | orchestrator | 17:24:09.752 STDOUT terraform:       + availability_zone    = "nova"
2025-07-04 17:24:09.752768 | orchestrator | 17:24:09.752 STDOUT terraform:       + id                   = (known after apply)
2025-07-04 17:24:09.752826 | orchestrator | 17:24:09.752 STDOUT terraform:       + image_id             = (known after apply)
2025-07-04 17:24:09.752889 | orchestrator | 17:24:09.752 STDOUT terraform:       + metadata             = (known after apply)
2025-07-04 17:24:09.752969 | orchestrator | 17:24:09.752 STDOUT terraform:       + name                 = "testbed-volume-3-node-base"
2025-07-04 17:24:09.753021 | orchestrator | 17:24:09.752 STDOUT terraform:       + region               = (known after apply)
2025-07-04 17:24:09.753054 | orchestrator | 17:24:09.753 STDOUT terraform:       + size                 = 80
2025-07-04 17:24:09.753095 | orchestrator | 17:24:09.753 STDOUT terraform:       + volume_retype_policy = "never"
2025-07-04 17:24:09.753138 | orchestrator | 17:24:09.753 STDOUT terraform:       + volume_type          = "ssd"
2025-07-04 17:24:09.753160 | orchestrator | 17:24:09.753 STDOUT terraform:     }
2025-07-04 17:24:09.753240 | orchestrator | 17:24:09.753 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2025-07-04 17:24:09.753317 | orchestrator | 17:24:09.753 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-07-04 17:24:09.753382 | orchestrator | 17:24:09.753 STDOUT terraform:       + attachment           = (known after apply)
2025-07-04 17:24:09.753420 | orchestrator | 17:24:09.753 STDOUT terraform:       + availability_zone    = "nova"
2025-07-04 17:24:09.754130 | orchestrator | 17:24:09.753 STDOUT terraform:       + id                   = (known after apply)
2025-07-04 17:24:09.754203 | orchestrator | 17:24:09.753 STDOUT terraform:       + image_id             = (known after apply)
2025-07-04 17:24:09.754214 | orchestrator | 17:24:09.753 STDOUT terraform:       + metadata             = (known after apply)
2025-07-04 17:24:09.754223 | orchestrator | 17:24:09.753 STDOUT terraform:       + name                 = "testbed-volume-4-node-base"
2025-07-04 17:24:09.754231 | orchestrator | 17:24:09.753 STDOUT terraform:       + region               = (known after apply)
2025-07-04 17:24:09.754239 | orchestrator | 17:24:09.753 STDOUT terraform:       + size                 = 80
2025-07-04 17:24:09.754248 | orchestrator | 17:24:09.753 STDOUT terraform:       + volume_retype_policy = "never"
2025-07-04 17:24:09.754256 | orchestrator | 17:24:09.753 STDOUT terraform:       + volume_type          = "ssd"
2025-07-04 17:24:09.754264 | orchestrator | 17:24:09.753 STDOUT terraform:     }
2025-07-04 17:24:09.754272 | orchestrator | 17:24:09.753 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2025-07-04 17:24:09.754281 | orchestrator | 17:24:09.753 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-07-04 17:24:09.754297 | orchestrator | 17:24:09.754 STDOUT terraform:       + attachment           = (known after apply)
2025-07-04 17:24:09.754305 | orchestrator | 17:24:09.754 STDOUT terraform:       + availability_zone    = "nova"
2025-07-04 17:24:09.754323 | orchestrator | 17:24:09.754 STDOUT terraform:       + id                   = (known after apply)
2025-07-04 17:24:09.754331 | orchestrator | 17:24:09.754 STDOUT terraform:       + image_id             = (known after apply)
2025-07-04 17:24:09.754339 | orchestrator | 17:24:09.754 STDOUT terraform:       + metadata             = (known after apply)
2025-07-04 17:24:09.754377 | orchestrator | 17:24:09.754 STDOUT terraform:       + name                 = "testbed-volume-5-node-base"
2025-07-04 17:24:09.754449 | orchestrator | 17:24:09.754 STDOUT terraform:       + region               = (known after apply)
2025-07-04 17:24:09.754470 | orchestrator | 17:24:09.754 STDOUT terraform:       + size                 = 80
2025-07-04 17:24:09.754506 | orchestrator | 17:24:09.754 STDOUT terraform:       + volume_retype_policy = "never"
2025-07-04 17:24:09.754525 | orchestrator | 17:24:09.754 STDOUT terraform:       + volume_type          = "ssd"
2025-07-04 17:24:09.754539 | orchestrator | 17:24:09.754 STDOUT terraform:     }
2025-07-04 17:24:09.754658 | orchestrator | 17:24:09.754 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[0] will be created
2025-07-04 17:24:09.754707 | orchestrator | 17:24:09.754 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-07-04 17:24:09.754760 | orchestrator | 17:24:09.754 STDOUT
terraform:  + attachment = (known after apply) 2025-07-04 17:24:09.754799 | orchestrator | 17:24:09.754 STDOUT terraform:  + availability_zone = "nova" 2025-07-04 17:24:09.754881 | orchestrator | 17:24:09.754 STDOUT terraform:  + id = (known after apply) 2025-07-04 17:24:09.754918 | orchestrator | 17:24:09.754 STDOUT terraform:  + metadata = (known after apply) 2025-07-04 17:24:09.754982 | orchestrator | 17:24:09.754 STDOUT terraform:  + name = "testbed-volume-0-node-3" 2025-07-04 17:24:09.755043 | orchestrator | 17:24:09.754 STDOUT terraform:  + region = (known after apply) 2025-07-04 17:24:09.755078 | orchestrator | 17:24:09.755 STDOUT terraform:  + size = 20 2025-07-04 17:24:09.755119 | orchestrator | 17:24:09.755 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-04 17:24:09.755159 | orchestrator | 17:24:09.755 STDOUT terraform:  + volume_type = "ssd" 2025-07-04 17:24:09.755172 | orchestrator | 17:24:09.755 STDOUT terraform:  } 2025-07-04 17:24:09.755269 | orchestrator | 17:24:09.755 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[1] will be created 2025-07-04 17:24:09.755321 | orchestrator | 17:24:09.755 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-07-04 17:24:09.755379 | orchestrator | 17:24:09.755 STDOUT terraform:  + attachment = (known after apply) 2025-07-04 17:24:09.755418 | orchestrator | 17:24:09.755 STDOUT terraform:  + availability_zone = "nova" 2025-07-04 17:24:09.755480 | orchestrator | 17:24:09.755 STDOUT terraform:  + id = (known after apply) 2025-07-04 17:24:09.755538 | orchestrator | 17:24:09.755 STDOUT terraform:  + metadata = (known after apply) 2025-07-04 17:24:09.755603 | orchestrator | 17:24:09.755 STDOUT terraform:  + name = "testbed-volume-1-node-4" 2025-07-04 17:24:09.755672 | orchestrator | 17:24:09.755 STDOUT terraform:  + region = (known after apply) 2025-07-04 17:24:09.755709 | orchestrator | 17:24:09.755 STDOUT terraform:  + size = 20 2025-07-04 17:24:09.755753 | 
orchestrator | 17:24:09.755 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-04 17:24:09.755785 | orchestrator | 17:24:09.755 STDOUT terraform:  + volume_type = "ssd" 2025-07-04 17:24:09.755797 | orchestrator | 17:24:09.755 STDOUT terraform:  } 2025-07-04 17:24:09.755872 | orchestrator | 17:24:09.755 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-07-04 17:24:09.755949 | orchestrator | 17:24:09.755 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-07-04 17:24:09.756008 | orchestrator | 17:24:09.755 STDOUT terraform:  + attachment = (known after apply) 2025-07-04 17:24:09.756040 | orchestrator | 17:24:09.755 STDOUT terraform:  + availability_zone = "nova" 2025-07-04 17:24:09.756099 | orchestrator | 17:24:09.756 STDOUT terraform:  + id = (known after apply) 2025-07-04 17:24:09.756157 | orchestrator | 17:24:09.756 STDOUT terraform:  + metadata = (known after apply) 2025-07-04 17:24:09.756220 | orchestrator | 17:24:09.756 STDOUT terraform:  + name = "testbed-volume-2-node-5" 2025-07-04 17:24:09.756286 | orchestrator | 17:24:09.756 STDOUT terraform:  + region = (known after apply) 2025-07-04 17:24:09.756306 | orchestrator | 17:24:09.756 STDOUT terraform:  + size = 20 2025-07-04 17:24:09.756351 | orchestrator | 17:24:09.756 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-04 17:24:09.756386 | orchestrator | 17:24:09.756 STDOUT terraform:  + volume_type = "ssd" 2025-07-04 17:24:09.756398 | orchestrator | 17:24:09.756 STDOUT terraform:  } 2025-07-04 17:24:09.756477 | orchestrator | 17:24:09.756 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-07-04 17:24:09.756549 | orchestrator | 17:24:09.756 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-07-04 17:24:09.756628 | orchestrator | 17:24:09.756 STDOUT terraform:  + attachment = (known after apply) 2025-07-04 17:24:09.756675 | orchestrator | 
17:24:09.756 STDOUT terraform:  + availability_zone = "nova" 2025-07-04 17:24:09.756735 | orchestrator | 17:24:09.756 STDOUT terraform:  + id = (known after apply) 2025-07-04 17:24:09.756793 | orchestrator | 17:24:09.756 STDOUT terraform:  + metadata = (known after apply) 2025-07-04 17:24:09.756857 | orchestrator | 17:24:09.756 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-07-04 17:24:09.756915 | orchestrator | 17:24:09.756 STDOUT terraform:  + region = (known after apply) 2025-07-04 17:24:09.756949 | orchestrator | 17:24:09.756 STDOUT terraform:  + size = 20 2025-07-04 17:24:09.756989 | orchestrator | 17:24:09.756 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-04 17:24:09.757027 | orchestrator | 17:24:09.756 STDOUT terraform:  + volume_type = "ssd" 2025-07-04 17:24:09.757039 | orchestrator | 17:24:09.757 STDOUT terraform:  } 2025-07-04 17:24:09.757115 | orchestrator | 17:24:09.757 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-07-04 17:24:09.757191 | orchestrator | 17:24:09.757 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-07-04 17:24:09.757248 | orchestrator | 17:24:09.757 STDOUT terraform:  + attachment = (known after apply) 2025-07-04 17:24:09.757270 | orchestrator | 17:24:09.757 STDOUT terraform:  + availability_zone = "nova" 2025-07-04 17:24:09.757328 | orchestrator | 17:24:09.757 STDOUT terraform:  + id = (known after apply) 2025-07-04 17:24:09.757381 | orchestrator | 17:24:09.757 STDOUT terraform:  + metadata = (known after apply) 2025-07-04 17:24:09.757447 | orchestrator | 17:24:09.757 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-07-04 17:24:09.757496 | orchestrator | 17:24:09.757 STDOUT terraform:  + region = (known after apply) 2025-07-04 17:24:09.757532 | orchestrator | 17:24:09.757 STDOUT terraform:  + size = 20 2025-07-04 17:24:09.757568 | orchestrator | 17:24:09.757 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-04 
17:24:09.757618 | orchestrator | 17:24:09.757 STDOUT terraform:  + volume_type = "ssd" 2025-07-04 17:24:09.757630 | orchestrator | 17:24:09.757 STDOUT terraform:  } 2025-07-04 17:24:09.757701 | orchestrator | 17:24:09.757 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-07-04 17:24:09.757766 | orchestrator | 17:24:09.757 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-07-04 17:24:09.757821 | orchestrator | 17:24:09.757 STDOUT terraform:  + attachment = (known after apply) 2025-07-04 17:24:09.757860 | orchestrator | 17:24:09.757 STDOUT terraform:  + availability_zone = "nova" 2025-07-04 17:24:09.757915 | orchestrator | 17:24:09.757 STDOUT terraform:  + id = (known after apply) 2025-07-04 17:24:09.757970 | orchestrator | 17:24:09.757 STDOUT terraform:  + metadata = (known after apply) 2025-07-04 17:24:09.758659 | orchestrator | 17:24:09.757 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-07-04 17:24:09.758679 | orchestrator | 17:24:09.758 STDOUT terraform:  + region = (known after apply) 2025-07-04 17:24:09.758686 | orchestrator | 17:24:09.758 STDOUT terraform:  + size = 20 2025-07-04 17:24:09.758710 | orchestrator | 17:24:09.758 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-04 17:24:09.758718 | orchestrator | 17:24:09.758 STDOUT terraform:  + volume_type = "ssd" 2025-07-04 17:24:09.758724 | orchestrator | 17:24:09.758 STDOUT terraform:  } 2025-07-04 17:24:09.758731 | orchestrator | 17:24:09.758 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-07-04 17:24:09.758742 | orchestrator | 17:24:09.758 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-07-04 17:24:09.758784 | orchestrator | 17:24:09.758 STDOUT terraform:  + attachment = (known after apply) 2025-07-04 17:24:09.758817 | orchestrator | 17:24:09.758 STDOUT terraform:  + availability_zone = "nova" 2025-07-04 17:24:09.758894 | 
orchestrator | 17:24:09.758 STDOUT terraform:  + id = (known after apply) 2025-07-04 17:24:09.758936 | orchestrator | 17:24:09.758 STDOUT terraform:  + metadata = (known after apply) 2025-07-04 17:24:09.759005 | orchestrator | 17:24:09.758 STDOUT terraform:  + name = "testbed-volume-6-node-3" 2025-07-04 17:24:09.759088 | orchestrator | 17:24:09.758 STDOUT terraform:  + region = (known after apply) 2025-07-04 17:24:09.759114 | orchestrator | 17:24:09.759 STDOUT terraform:  + size = 20 2025-07-04 17:24:09.759152 | orchestrator | 17:24:09.759 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-04 17:24:09.759187 | orchestrator | 17:24:09.759 STDOUT terraform:  + volume_type = "ssd" 2025-07-04 17:24:09.759198 | orchestrator | 17:24:09.759 STDOUT terraform:  } 2025-07-04 17:24:09.759263 | orchestrator | 17:24:09.759 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-07-04 17:24:09.759321 | orchestrator | 17:24:09.759 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-07-04 17:24:09.759369 | orchestrator | 17:24:09.759 STDOUT terraform:  + attachment = (known after apply) 2025-07-04 17:24:09.759403 | orchestrator | 17:24:09.759 STDOUT terraform:  + availability_zone = "nova" 2025-07-04 17:24:09.759454 | orchestrator | 17:24:09.759 STDOUT terraform:  + id = (known after apply) 2025-07-04 17:24:09.759505 | orchestrator | 17:24:09.759 STDOUT terraform:  + metadata = (known after apply) 2025-07-04 17:24:09.759558 | orchestrator | 17:24:09.759 STDOUT terraform:  + name = "testbed-volume-7-node-4" 2025-07-04 17:24:09.759626 | orchestrator | 17:24:09.759 STDOUT terraform:  + region = (known after apply) 2025-07-04 17:24:09.759643 | orchestrator | 17:24:09.759 STDOUT terraform:  + size = 20 2025-07-04 17:24:09.759675 | orchestrator | 17:24:09.759 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-04 17:24:09.759708 | orchestrator | 17:24:09.759 STDOUT terraform:  + volume_type = "ssd" 
2025-07-04 17:24:09.759719 | orchestrator | 17:24:09.759 STDOUT terraform:  } 2025-07-04 17:24:09.759780 | orchestrator | 17:24:09.759 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[8] will be created 2025-07-04 17:24:09.759838 | orchestrator | 17:24:09.759 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-07-04 17:24:09.759888 | orchestrator | 17:24:09.759 STDOUT terraform:  + attachment = (known after apply) 2025-07-04 17:24:09.759919 | orchestrator | 17:24:09.759 STDOUT terraform:  + availability_zone = "nova" 2025-07-04 17:24:09.759971 | orchestrator | 17:24:09.759 STDOUT terraform:  + id = (known after apply) 2025-07-04 17:24:09.760019 | orchestrator | 17:24:09.759 STDOUT terraform:  + metadata = (known after apply) 2025-07-04 17:24:09.760074 | orchestrator | 17:24:09.760 STDOUT terraform:  + name = "testbed-volume-8-node-5" 2025-07-04 17:24:09.760123 | orchestrator | 17:24:09.760 STDOUT terraform:  + region = (known after apply) 2025-07-04 17:24:09.760152 | orchestrator | 17:24:09.760 STDOUT terraform:  + size = 20 2025-07-04 17:24:09.760185 | orchestrator | 17:24:09.760 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-04 17:24:09.760218 | orchestrator | 17:24:09.760 STDOUT terraform:  + volume_type = "ssd" 2025-07-04 17:24:09.760228 | orchestrator | 17:24:09.760 STDOUT terraform:  } 2025-07-04 17:24:09.760290 | orchestrator | 17:24:09.760 STDOUT terraform:  # openstack_compute_instance_v2.manager_server will be created 2025-07-04 17:24:09.760351 | orchestrator | 17:24:09.760 STDOUT terraform:  + resource "openstack_compute_instance_v2" "manager_server" { 2025-07-04 17:24:09.760397 | orchestrator | 17:24:09.760 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-07-04 17:24:09.760447 | orchestrator | 17:24:09.760 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-07-04 17:24:09.760493 | orchestrator | 17:24:09.760 STDOUT terraform:  + all_metadata = (known after apply) 
2025-07-04 17:24:09.760557 | orchestrator | 17:24:09.760 STDOUT terraform:  + all_tags = (known after apply) 2025-07-04 17:24:09.760589 | orchestrator | 17:24:09.760 STDOUT terraform:  + availability_zone = "nova" 2025-07-04 17:24:09.760650 | orchestrator | 17:24:09.760 STDOUT terraform:  + config_drive = true 2025-07-04 17:24:09.760681 | orchestrator | 17:24:09.760 STDOUT terraform:  + created = (known after apply) 2025-07-04 17:24:09.760729 | orchestrator | 17:24:09.760 STDOUT terraform:  + flavor_id = (known after apply) 2025-07-04 17:24:09.760770 | orchestrator | 17:24:09.760 STDOUT terraform:  + flavor_name = "OSISM-4V-16" 2025-07-04 17:24:09.760802 | orchestrator | 17:24:09.760 STDOUT terraform:  + force_delete = false 2025-07-04 17:24:09.760847 | orchestrator | 17:24:09.760 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-07-04 17:24:09.760897 | orchestrator | 17:24:09.760 STDOUT terraform:  + id = (known after apply) 2025-07-04 17:24:09.760945 | orchestrator | 17:24:09.760 STDOUT terraform:  + image_id = (known after apply) 2025-07-04 17:24:09.760993 | orchestrator | 17:24:09.760 STDOUT terraform:  + image_name = (known after apply) 2025-07-04 17:24:09.761028 | orchestrator | 17:24:09.760 STDOUT terraform:  + key_pair = "testbed" 2025-07-04 17:24:09.761071 | orchestrator | 17:24:09.761 STDOUT terraform:  + name = "testbed-manager" 2025-07-04 17:24:09.761104 | orchestrator | 17:24:09.761 STDOUT terraform:  + power_state = "active" 2025-07-04 17:24:09.761154 | orchestrator | 17:24:09.761 STDOUT terraform:  + region = (known after apply) 2025-07-04 17:24:09.761201 | orchestrator | 17:24:09.761 STDOUT terraform:  + security_groups = (known after apply) 2025-07-04 17:24:09.761233 | orchestrator | 17:24:09.761 STDOUT terraform:  + stop_before_destroy = false 2025-07-04 17:24:09.761282 | orchestrator | 17:24:09.761 STDOUT terraform:  + updated = (known after apply) 2025-07-04 17:24:09.761324 | orchestrator | 17:24:09.761 STDOUT terraform:  + 
user_data = (sensitive value) 2025-07-04 17:24:09.761347 | orchestrator | 17:24:09.761 STDOUT terraform:  + block_device { 2025-07-04 17:24:09.761381 | orchestrator | 17:24:09.761 STDOUT terraform:  + boot_index = 0 2025-07-04 17:24:09.761419 | orchestrator | 17:24:09.761 STDOUT terraform:  + delete_on_termination = false 2025-07-04 17:24:09.761463 | orchestrator | 17:24:09.761 STDOUT terraform:  + destination_type = "volume" 2025-07-04 17:24:09.761498 | orchestrator | 17:24:09.761 STDOUT terraform:  + multiattach = false 2025-07-04 17:24:09.761537 | orchestrator | 17:24:09.761 STDOUT terraform:  + source_type = "volume" 2025-07-04 17:24:09.761594 | orchestrator | 17:24:09.761 STDOUT terraform:  + uuid = (known after apply) 2025-07-04 17:24:09.761627 | orchestrator | 17:24:09.761 STDOUT terraform:  } 2025-07-04 17:24:09.761641 | orchestrator | 17:24:09.761 STDOUT terraform:  + network { 2025-07-04 17:24:09.761671 | orchestrator | 17:24:09.761 STDOUT terraform:  + access_network = false 2025-07-04 17:24:09.761711 | orchestrator | 17:24:09.761 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-07-04 17:24:09.761755 | orchestrator | 17:24:09.761 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-07-04 17:24:09.761794 | orchestrator | 17:24:09.761 STDOUT terraform:  + mac = (known after apply) 2025-07-04 17:24:09.761837 | orchestrator | 17:24:09.761 STDOUT terraform:  + name = (known after apply) 2025-07-04 17:24:09.761880 | orchestrator | 17:24:09.761 STDOUT terraform:  + port = (known after apply) 2025-07-04 17:24:09.761923 | orchestrator | 17:24:09.761 STDOUT terraform:  + uuid = (known after apply) 2025-07-04 17:24:09.761933 | orchestrator | 17:24:09.761 STDOUT terraform:  } 2025-07-04 17:24:09.761956 | orchestrator | 17:24:09.761 STDOUT terraform:  } 2025-07-04 17:24:09.762033 | orchestrator | 17:24:09.761 STDOUT terraform:  # openstack_compute_instance_v2.node_server[0] will be created 2025-07-04 17:24:09.763878 | orchestrator | 17:24:09.762 
STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-07-04 17:24:09.763912 | orchestrator | 17:24:09.762 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-07-04 17:24:09.763919 | orchestrator | 17:24:09.762 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-07-04 17:24:09.763925 | orchestrator | 17:24:09.762 STDOUT terraform:  + all_metadata = (known after apply) 2025-07-04 17:24:09.763931 | orchestrator | 17:24:09.762 STDOUT terraform:  + all_tags = (known after apply) 2025-07-04 17:24:09.763938 | orchestrator | 17:24:09.762 STDOUT terraform:  + availability_zone = "nova" 2025-07-04 17:24:09.763944 | orchestrator | 17:24:09.762 STDOUT terraform:  + config_drive = true 2025-07-04 17:24:09.763950 | orchestrator | 17:24:09.762 STDOUT terraform:  + created = (known after apply) 2025-07-04 17:24:09.763956 | orchestrator | 17:24:09.762 STDOUT terraform:  + flavor_id = (known after apply) 2025-07-04 17:24:09.763962 | orchestrator | 17:24:09.762 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-07-04 17:24:09.763969 | orchestrator | 17:24:09.762 STDOUT terraform:  + force_delete = false 2025-07-04 17:24:09.763975 | orchestrator | 17:24:09.762 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-07-04 17:24:09.763981 | orchestrator | 17:24:09.762 STDOUT terraform:  + id = (known after apply) 2025-07-04 17:24:09.763987 | orchestrator | 17:24:09.762 STDOUT terraform:  + image_id = (known after apply) 2025-07-04 17:24:09.763994 | orchestrator | 17:24:09.762 STDOUT terraform:  + image_name = (known after apply) 2025-07-04 17:24:09.764000 | orchestrator | 17:24:09.762 STDOUT terraform:  + key_pair = "testbed" 2025-07-04 17:24:09.764006 | orchestrator | 17:24:09.762 STDOUT terraform:  + name = "testbed-node-0" 2025-07-04 17:24:09.764012 | orchestrator | 17:24:09.762 STDOUT terraform:  + power_state = "active" 2025-07-04 17:24:09.764018 | orchestrator | 17:24:09.762 STDOUT terraform:  + region = (known after 
apply) 2025-07-04 17:24:09.764024 | orchestrator | 17:24:09.762 STDOUT terraform:  + security_groups = (known after apply) 2025-07-04 17:24:09.764031 | orchestrator | 17:24:09.762 STDOUT terraform:  + stop_before_destroy = false 2025-07-04 17:24:09.764037 | orchestrator | 17:24:09.762 STDOUT terraform:  + updated = (known after apply) 2025-07-04 17:24:09.764043 | orchestrator | 17:24:09.763 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-07-04 17:24:09.764049 | orchestrator | 17:24:09.763 STDOUT terraform:  + block_device { 2025-07-04 17:24:09.764065 | orchestrator | 17:24:09.763 STDOUT terraform:  + boot_index = 0 2025-07-04 17:24:09.764071 | orchestrator | 17:24:09.763 STDOUT terraform:  + delete_on_termination = false 2025-07-04 17:24:09.764077 | orchestrator | 17:24:09.763 STDOUT terraform:  + destination_type = "volume" 2025-07-04 17:24:09.764084 | orchestrator | 17:24:09.763 STDOUT terraform:  + multiattach = false 2025-07-04 17:24:09.764090 | orchestrator | 17:24:09.763 STDOUT terraform:  + source_type = "volume" 2025-07-04 17:24:09.764096 | orchestrator | 17:24:09.763 STDOUT terraform:  + uuid = (known after apply) 2025-07-04 17:24:09.764102 | orchestrator | 17:24:09.763 STDOUT terraform:  } 2025-07-04 17:24:09.764108 | orchestrator | 17:24:09.763 STDOUT terraform:  + network { 2025-07-04 17:24:09.764115 | orchestrator | 17:24:09.763 STDOUT terraform:  + access_network = false 2025-07-04 17:24:09.764125 | orchestrator | 17:24:09.763 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-07-04 17:24:09.764132 | orchestrator | 17:24:09.763 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-07-04 17:24:09.764138 | orchestrator | 17:24:09.763 STDOUT terraform:  + mac = (known after apply) 2025-07-04 17:24:09.764144 | orchestrator | 17:24:09.763 STDOUT terraform:  + name = (known after apply) 2025-07-04 17:24:09.764163 | orchestrator | 17:24:09.763 STDOUT terraform:  + port = (known after apply) 2025-07-04 
17:24:09.764174 | orchestrator | 17:24:09.763 STDOUT terraform:  + uuid = (known after apply) 2025-07-04 17:24:09.764184 | orchestrator | 17:24:09.763 STDOUT terraform:  } 2025-07-04 17:24:09.764193 | orchestrator | 17:24:09.763 STDOUT terraform:  } 2025-07-04 17:24:09.764203 | orchestrator | 17:24:09.763 STDOUT terraform:  # openstack_compute_instance_v2.node_server[1] will be created 2025-07-04 17:24:09.764213 | orchestrator | 17:24:09.763 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-07-04 17:24:09.764223 | orchestrator | 17:24:09.763 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-07-04 17:24:09.764231 | orchestrator | 17:24:09.763 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-07-04 17:24:09.764240 | orchestrator | 17:24:09.763 STDOUT terraform:  + all_metadata = (known after apply) 2025-07-04 17:24:09.764249 | orchestrator | 17:24:09.763 STDOUT terraform:  + all_tags = (known after apply) 2025-07-04 17:24:09.764258 | orchestrator | 17:24:09.763 STDOUT terraform:  + availability_zone = "nova" 2025-07-04 17:24:09.764267 | orchestrator | 17:24:09.763 STDOUT terraform:  + config_drive = true 2025-07-04 17:24:09.764276 | orchestrator | 17:24:09.763 STDOUT terraform:  + created = (known after apply) 2025-07-04 17:24:09.764286 | orchestrator | 17:24:09.763 STDOUT terraform:  + flavor_id = (known after apply) 2025-07-04 17:24:09.764296 | orchestrator | 17:24:09.763 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-07-04 17:24:09.764305 | orchestrator | 17:24:09.763 STDOUT terraform:  + force_delete = false 2025-07-04 17:24:09.764315 | orchestrator | 17:24:09.763 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-07-04 17:24:09.764333 | orchestrator | 17:24:09.764 STDOUT terraform:  + id = (known after apply) 2025-07-04 17:24:09.764343 | orchestrator | 17:24:09.764 STDOUT terraform:  + image_id = (known after apply) 2025-07-04 17:24:09.764354 | orchestrator | 17:24:09.764 STDOUT 
terraform:  + image_name = (known after apply) 2025-07-04 17:24:09.764363 | orchestrator | 17:24:09.764 STDOUT terraform:  + key_pair = "testbed" 2025-07-04 17:24:09.764378 | orchestrator | 17:24:09.764 STDOUT terraform:  + name = "testbed-node-1" 2025-07-04 17:24:09.764388 | orchestrator | 17:24:09.764 STDOUT terraform:  + power_state = "active" 2025-07-04 17:24:09.764399 | orchestrator | 17:24:09.764 STDOUT terraform:  + region = (known after apply) 2025-07-04 17:24:09.764408 | orchestrator | 17:24:09.764 STDOUT terraform:  + security_groups = (known after apply) 2025-07-04 17:24:09.764418 | orchestrator | 17:24:09.764 STDOUT terraform:  + stop_before_destroy = false 2025-07-04 17:24:09.764425 | orchestrator | 17:24:09.764 STDOUT terraform:  + updated = (known after apply) 2025-07-04 17:24:09.764434 | orchestrator | 17:24:09.764 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-07-04 17:24:09.764440 | orchestrator | 17:24:09.764 STDOUT terraform:  + block_device { 2025-07-04 17:24:09.764446 | orchestrator | 17:24:09.764 STDOUT terraform:  + boot_index = 0 2025-07-04 17:24:09.764455 | orchestrator | 17:24:09.764 STDOUT terraform:  + delete_on_termination = false 2025-07-04 17:24:09.764499 | orchestrator | 17:24:09.764 STDOUT terraform:  + destination_type = "volume" 2025-07-04 17:24:09.764529 | orchestrator | 17:24:09.764 STDOUT terraform:  + multiattach = false 2025-07-04 17:24:09.764564 | orchestrator | 17:24:09.764 STDOUT terraform:  + source_type = "volume" 2025-07-04 17:24:09.764640 | orchestrator | 17:24:09.764 STDOUT terraform:  + uuid = (known after apply) 2025-07-04 17:24:09.764649 | orchestrator | 17:24:09.764 STDOUT terraform:  } 2025-07-04 17:24:09.764658 | orchestrator | 17:24:09.764 STDOUT terraform:  + network { 2025-07-04 17:24:09.764667 | orchestrator | 17:24:09.764 STDOUT terraform:  + access_network = false 2025-07-04 17:24:09.764702 | orchestrator | 17:24:09.764 STDOUT terraform:  + fixed_ip_v4 = (known after 
apply) 2025-07-04 17:24:09.764739 | orchestrator | 17:24:09.764 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-07-04 17:24:09.764917 | orchestrator | 17:24:09.764 STDOUT terraform:  + mac = (known after apply) 2025-07-04 17:24:09.764991 | orchestrator | 17:24:09.764 STDOUT terraform:  + name = (known after apply) 2025-07-04 17:24:09.765005 | orchestrator | 17:24:09.764 STDOUT terraform:  + port = (known after apply) 2025-07-04 17:24:09.765026 | orchestrator | 17:24:09.764 STDOUT terraform:  + uuid = (known after apply) 2025-07-04 17:24:09.765036 | orchestrator | 17:24:09.764 STDOUT terraform:  } 2025-07-04 17:24:09.765048 | orchestrator | 17:24:09.764 STDOUT terraform:  } 2025-07-04 17:24:09.765058 | orchestrator | 17:24:09.764 STDOUT terraform:  # openstack_compute_instance_v2.node_server[2] will be created 2025-07-04 17:24:09.765069 | orchestrator | 17:24:09.764 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-07-04 17:24:09.765100 | orchestrator | 17:24:09.765 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-07-04 17:24:09.765110 | orchestrator | 17:24:09.765 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-07-04 17:24:09.765123 | orchestrator | 17:24:09.765 STDOUT terraform:  + all_metadata = (known after apply) 2025-07-04 17:24:09.765177 | orchestrator | 17:24:09.765 STDOUT terraform:  + all_tags = (known after apply) 2025-07-04 17:24:09.765194 | orchestrator | 17:24:09.765 STDOUT terraform:  + availability_zone = "nova" 2025-07-04 17:24:09.765207 | orchestrator | 17:24:09.765 STDOUT terraform:  + config_drive = true 2025-07-04 17:24:09.765255 | orchestrator | 17:24:09.765 STDOUT terraform:  + created = (known after apply) 2025-07-04 17:24:09.765296 | orchestrator | 17:24:09.765 STDOUT terraform:  + flavor_id = (known after apply) 2025-07-04 17:24:09.765332 | orchestrator | 17:24:09.765 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-07-04 17:24:09.765367 | orchestrator | 17:24:09.765 
2025-07-04 17:24:09 | orchestrator | STDOUT terraform:
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-2"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[3] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-3"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[4] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-4"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[5] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-5"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_keypair_v2.key will be created
  + resource "openstack_compute_keypair_v2" "key" {
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + name        = "testbed"
      + private_key = (sensitive value)
      + public_key  = (known after apply)
      + region      = (known after apply)
      + user_id     = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         =
(known after apply) 2025-07-04 17:24:09.777794 | orchestrator | 17:24:09.777 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-07-04 17:24:09.777830 | orchestrator | 17:24:09.777 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-07-04 17:24:09.777863 | orchestrator | 17:24:09.777 STDOUT terraform:  + all_tags = (known after apply) 2025-07-04 17:24:09.777900 | orchestrator | 17:24:09.777 STDOUT terraform:  + device_id = (known after apply) 2025-07-04 17:24:09.777936 | orchestrator | 17:24:09.777 STDOUT terraform:  + device_owner = (known after apply) 2025-07-04 17:24:09.777975 | orchestrator | 17:24:09.777 STDOUT terraform:  + dns_assignment = (known after apply) 2025-07-04 17:24:09.778042 | orchestrator | 17:24:09.777 STDOUT terraform:  + dns_name = (known after apply) 2025-07-04 17:24:09.778299 | orchestrator | 17:24:09.778 STDOUT terraform:  + id = (known after apply) 2025-07-04 17:24:09.778322 | orchestrator | 17:24:09.778 STDOUT terraform:  + mac_address = (known after apply) 2025-07-04 17:24:09.778327 | orchestrator | 17:24:09.778 STDOUT terraform:  + network_id = (known after apply) 2025-07-04 17:24:09.778331 | orchestrator | 17:24:09.778 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-07-04 17:24:09.778337 | orchestrator | 17:24:09.778 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-07-04 17:24:09.778362 | orchestrator | 17:24:09.778 STDOUT terraform:  + region = (known after apply) 2025-07-04 17:24:09.778395 | orchestrator | 17:24:09.778 STDOUT terraform:  + security_group_ids = (known after apply) 2025-07-04 17:24:09.778434 | orchestrator | 17:24:09.778 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-04 17:24:09.778458 | orchestrator | 17:24:09.778 STDOUT terraform:  + allowed_address_pairs { 2025-07-04 17:24:09.778489 | orchestrator | 17:24:09.778 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-07-04 17:24:09.778496 | orchestrator | 17:24:09.778 STDOUT terraform:  
} 2025-07-04 17:24:09.778517 | orchestrator | 17:24:09.778 STDOUT terraform:  + allowed_address_pairs { 2025-07-04 17:24:09.778548 | orchestrator | 17:24:09.778 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-07-04 17:24:09.778554 | orchestrator | 17:24:09.778 STDOUT terraform:  } 2025-07-04 17:24:09.778576 | orchestrator | 17:24:09.778 STDOUT terraform:  + allowed_address_pairs { 2025-07-04 17:24:09.778602 | orchestrator | 17:24:09.778 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-07-04 17:24:09.778647 | orchestrator | 17:24:09.778 STDOUT terraform:  } 2025-07-04 17:24:09.778666 | orchestrator | 17:24:09.778 STDOUT terraform:  + allowed_address_pairs { 2025-07-04 17:24:09.778693 | orchestrator | 17:24:09.778 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-07-04 17:24:09.778711 | orchestrator | 17:24:09.778 STDOUT terraform:  } 2025-07-04 17:24:09.778734 | orchestrator | 17:24:09.778 STDOUT terraform:  + binding (known after apply) 2025-07-04 17:24:09.778740 | orchestrator | 17:24:09.778 STDOUT terraform:  + fixed_ip { 2025-07-04 17:24:09.778769 | orchestrator | 17:24:09.778 STDOUT terraform:  + ip_address = "192.168.16.12" 2025-07-04 17:24:09.778797 | orchestrator | 17:24:09.778 STDOUT terraform:  + subnet_id = (known after apply) 2025-07-04 17:24:09.778804 | orchestrator | 17:24:09.778 STDOUT terraform:  } 2025-07-04 17:24:09.778820 | orchestrator | 17:24:09.778 STDOUT terraform:  } 2025-07-04 17:24:09.778865 | orchestrator | 17:24:09.778 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[3] will be created 2025-07-04 17:24:09.778911 | orchestrator | 17:24:09.778 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-07-04 17:24:09.778948 | orchestrator | 17:24:09.778 STDOUT terraform:  + admin_state_up = (known after apply) 2025-07-04 17:24:09.778992 | orchestrator | 17:24:09.778 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-07-04 17:24:09.779029 | orchestrator 
| 17:24:09.778 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-07-04 17:24:09.779064 | orchestrator | 17:24:09.779 STDOUT terraform:  + all_tags = (known after apply) 2025-07-04 17:24:09.779101 | orchestrator | 17:24:09.779 STDOUT terraform:  + device_id = (known after apply) 2025-07-04 17:24:09.779134 | orchestrator | 17:24:09.779 STDOUT terraform:  + device_owner = (known after apply) 2025-07-04 17:24:09.779169 | orchestrator | 17:24:09.779 STDOUT terraform:  + dns_assignment = (known after apply) 2025-07-04 17:24:09.779204 | orchestrator | 17:24:09.779 STDOUT terraform:  + dns_name = (known after apply) 2025-07-04 17:24:09.779244 | orchestrator | 17:24:09.779 STDOUT terraform:  + id = (known after apply) 2025-07-04 17:24:09.779317 | orchestrator | 17:24:09.779 STDOUT terraform:  + mac_address = (known after apply) 2025-07-04 17:24:09.779348 | orchestrator | 17:24:09.779 STDOUT terraform:  + network_id = (known after apply) 2025-07-04 17:24:09.779383 | orchestrator | 17:24:09.779 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-07-04 17:24:09.779428 | orchestrator | 17:24:09.779 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-07-04 17:24:09.779463 | orchestrator | 17:24:09.779 STDOUT terraform:  + region = (known after apply) 2025-07-04 17:24:09.779497 | orchestrator | 17:24:09.779 STDOUT terraform:  + security_group_ids = (known after apply) 2025-07-04 17:24:09.779535 | orchestrator | 17:24:09.779 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-04 17:24:09.779554 | orchestrator | 17:24:09.779 STDOUT terraform:  + allowed_address_pairs { 2025-07-04 17:24:09.779582 | orchestrator | 17:24:09.779 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-07-04 17:24:09.779588 | orchestrator | 17:24:09.779 STDOUT terraform:  } 2025-07-04 17:24:09.779635 | orchestrator | 17:24:09.779 STDOUT terraform:  + allowed_address_pairs { 2025-07-04 17:24:09.779642 | orchestrator | 17:24:09.779 STDOUT 
terraform:  + ip_address = "192.168.16.254/20" 2025-07-04 17:24:09.779657 | orchestrator | 17:24:09.779 STDOUT terraform:  } 2025-07-04 17:24:09.779681 | orchestrator | 17:24:09.779 STDOUT terraform:  + allowed_address_pairs { 2025-07-04 17:24:09.779708 | orchestrator | 17:24:09.779 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-07-04 17:24:09.779723 | orchestrator | 17:24:09.779 STDOUT terraform:  } 2025-07-04 17:24:09.779746 | orchestrator | 17:24:09.779 STDOUT terraform:  + allowed_address_pairs { 2025-07-04 17:24:09.779774 | orchestrator | 17:24:09.779 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-07-04 17:24:09.779780 | orchestrator | 17:24:09.779 STDOUT terraform:  } 2025-07-04 17:24:09.779804 | orchestrator | 17:24:09.779 STDOUT terraform:  + binding (known after apply) 2025-07-04 17:24:09.779820 | orchestrator | 17:24:09.779 STDOUT terraform:  + fixed_ip { 2025-07-04 17:24:09.779847 | orchestrator | 17:24:09.779 STDOUT terraform:  + ip_address = "192.168.16.13" 2025-07-04 17:24:09.779875 | orchestrator | 17:24:09.779 STDOUT terraform:  + subnet_id = (known after apply) 2025-07-04 17:24:09.779881 | orchestrator | 17:24:09.779 STDOUT terraform:  } 2025-07-04 17:24:09.779903 | orchestrator | 17:24:09.779 STDOUT terraform:  } 2025-07-04 17:24:09.779950 | orchestrator | 17:24:09.779 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[4] will be created 2025-07-04 17:24:09.779993 | orchestrator | 17:24:09.779 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-07-04 17:24:09.780028 | orchestrator | 17:24:09.779 STDOUT terraform:  + admin_state_up = (known after apply) 2025-07-04 17:24:09.780071 | orchestrator | 17:24:09.780 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-07-04 17:24:09.780105 | orchestrator | 17:24:09.780 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-07-04 17:24:09.780140 | orchestrator | 17:24:09.780 STDOUT terraform:  + all_tags 
= (known after apply) 2025-07-04 17:24:09.780178 | orchestrator | 17:24:09.780 STDOUT terraform:  + device_id = (known after apply) 2025-07-04 17:24:09.780213 | orchestrator | 17:24:09.780 STDOUT terraform:  + device_owner = (known after apply) 2025-07-04 17:24:09.780247 | orchestrator | 17:24:09.780 STDOUT terraform:  + dns_assignment = (known after apply) 2025-07-04 17:24:09.780288 | orchestrator | 17:24:09.780 STDOUT terraform:  + dns_name = (known after apply) 2025-07-04 17:24:09.780326 | orchestrator | 17:24:09.780 STDOUT terraform:  + id = (known after apply) 2025-07-04 17:24:09.780361 | orchestrator | 17:24:09.780 STDOUT terraform:  + mac_address = (known after apply) 2025-07-04 17:24:09.780396 | orchestrator | 17:24:09.780 STDOUT terraform:  + network_id = (known after apply) 2025-07-04 17:24:09.780429 | orchestrator | 17:24:09.780 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-07-04 17:24:09.780465 | orchestrator | 17:24:09.780 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-07-04 17:24:09.780500 | orchestrator | 17:24:09.780 STDOUT terraform:  + region = (known after apply) 2025-07-04 17:24:09.780534 | orchestrator | 17:24:09.780 STDOUT terraform:  + security_group_ids = (known after apply) 2025-07-04 17:24:09.780568 | orchestrator | 17:24:09.780 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-04 17:24:09.780588 | orchestrator | 17:24:09.780 STDOUT terraform:  + allowed_address_pairs { 2025-07-04 17:24:09.780624 | orchestrator | 17:24:09.780 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-07-04 17:24:09.780645 | orchestrator | 17:24:09.780 STDOUT terraform:  } 2025-07-04 17:24:09.780664 | orchestrator | 17:24:09.780 STDOUT terraform:  + allowed_address_pairs { 2025-07-04 17:24:09.780694 | orchestrator | 17:24:09.780 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-07-04 17:24:09.780701 | orchestrator | 17:24:09.780 STDOUT terraform:  } 2025-07-04 17:24:09.780721 | orchestrator | 
17:24:09.780 STDOUT terraform:  + allowed_address_pairs { 2025-07-04 17:24:09.780749 | orchestrator | 17:24:09.780 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-07-04 17:24:09.780755 | orchestrator | 17:24:09.780 STDOUT terraform:  } 2025-07-04 17:24:09.780775 | orchestrator | 17:24:09.780 STDOUT terraform:  + allowed_address_pairs { 2025-07-04 17:24:09.780802 | orchestrator | 17:24:09.780 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-07-04 17:24:09.780818 | orchestrator | 17:24:09.780 STDOUT terraform:  } 2025-07-04 17:24:09.780845 | orchestrator | 17:24:09.780 STDOUT terraform:  + binding (known after apply) 2025-07-04 17:24:09.780866 | orchestrator | 17:24:09.780 STDOUT terraform:  + fixed_ip { 2025-07-04 17:24:09.780889 | orchestrator | 17:24:09.780 STDOUT terraform:  + ip_address = "192.168.16.14" 2025-07-04 17:24:09.780917 | orchestrator | 17:24:09.780 STDOUT terraform:  + subnet_id = (known after apply) 2025-07-04 17:24:09.780923 | orchestrator | 17:24:09.780 STDOUT terraform:  } 2025-07-04 17:24:09.780938 | orchestrator | 17:24:09.780 STDOUT terraform:  } 2025-07-04 17:24:09.780985 | orchestrator | 17:24:09.780 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[5] will be created 2025-07-04 17:24:09.781027 | orchestrator | 17:24:09.780 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-07-04 17:24:09.781061 | orchestrator | 17:24:09.781 STDOUT terraform:  + admin_state_up = (known after apply) 2025-07-04 17:24:09.781099 | orchestrator | 17:24:09.781 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-07-04 17:24:09.781133 | orchestrator | 17:24:09.781 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-07-04 17:24:09.781173 | orchestrator | 17:24:09.781 STDOUT terraform:  + all_tags = (known after apply) 2025-07-04 17:24:09.781207 | orchestrator | 17:24:09.781 STDOUT terraform:  + device_id = (known after apply) 2025-07-04 17:24:09.781248 | 
orchestrator | 17:24:09.781 STDOUT terraform:  + device_owner = (known after apply) 2025-07-04 17:24:09.781283 | orchestrator | 17:24:09.781 STDOUT terraform:  + dns_assignment = (known after apply) 2025-07-04 17:24:09.781323 | orchestrator | 17:24:09.781 STDOUT terraform:  + dns_name = (known after apply) 2025-07-04 17:24:09.781352 | orchestrator | 17:24:09.781 STDOUT terraform:  + id = (known after apply) 2025-07-04 17:24:09.781389 | orchestrator | 17:24:09.781 STDOUT terraform:  + mac_address = (known after apply) 2025-07-04 17:24:09.781424 | orchestrator | 17:24:09.781 STDOUT terraform:  + network_id = (known after apply) 2025-07-04 17:24:09.781457 | orchestrator | 17:24:09.781 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-07-04 17:24:09.781498 | orchestrator | 17:24:09.781 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-07-04 17:24:09.781541 | orchestrator | 17:24:09.781 STDOUT terraform:  + region = (known after apply) 2025-07-04 17:24:09.781573 | orchestrator | 17:24:09.781 STDOUT terraform:  + security_group_ids = (known after apply) 2025-07-04 17:24:09.781616 | orchestrator | 17:24:09.781 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-04 17:24:09.781635 | orchestrator | 17:24:09.781 STDOUT terraform:  + allowed_address_pairs { 2025-07-04 17:24:09.781662 | orchestrator | 17:24:09.781 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-07-04 17:24:09.781668 | orchestrator | 17:24:09.781 STDOUT terraform:  } 2025-07-04 17:24:09.781689 | orchestrator | 17:24:09.781 STDOUT terraform:  + allowed_address_pairs { 2025-07-04 17:24:09.781716 | orchestrator | 17:24:09.781 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-07-04 17:24:09.781739 | orchestrator | 17:24:09.781 STDOUT terraform:  } 2025-07-04 17:24:09.781761 | orchestrator | 17:24:09.781 STDOUT terraform:  + allowed_address_pairs { 2025-07-04 17:24:09.781787 | orchestrator | 17:24:09.781 STDOUT terraform:  + ip_address = "192.168.16.8/20" 
2025-07-04 17:24:09.781805 | orchestrator | 17:24:09.781 STDOUT terraform:  } 2025-07-04 17:24:09.781828 | orchestrator | 17:24:09.781 STDOUT terraform:  + allowed_address_pairs { 2025-07-04 17:24:09.781854 | orchestrator | 17:24:09.781 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-07-04 17:24:09.781860 | orchestrator | 17:24:09.781 STDOUT terraform:  } 2025-07-04 17:24:09.781885 | orchestrator | 17:24:09.781 STDOUT terraform:  + binding (known after apply) 2025-07-04 17:24:09.781891 | orchestrator | 17:24:09.781 STDOUT terraform:  + fixed_ip { 2025-07-04 17:24:09.781919 | orchestrator | 17:24:09.781 STDOUT terraform:  + ip_address = "192.168.16.15" 2025-07-04 17:24:09.781950 | orchestrator | 17:24:09.781 STDOUT terraform:  + subnet_id = (known after apply) 2025-07-04 17:24:09.781956 | orchestrator | 17:24:09.781 STDOUT terraform:  } 2025-07-04 17:24:09.781977 | orchestrator | 17:24:09.781 STDOUT terraform:  } 2025-07-04 17:24:09.782238 | orchestrator | 17:24:09.781 STDOUT terraform:  # openstack_networking_router_interface_v2.router_interface will be created 2025-07-04 17:24:09.782249 | orchestrator | 17:24:09.782 STDOUT terraform:  + resource "openstack_networking_router_interface_v2" "router_interface" { 2025-07-04 17:24:09.782253 | orchestrator | 17:24:09.782 STDOUT terraform:  + force_destroy = false 2025-07-04 17:24:09.782257 | orchestrator | 17:24:09.782 STDOUT terraform:  + id = (known after apply) 2025-07-04 17:24:09.782266 | orchestrator | 17:24:09.782 STDOUT terraform:  + port_id = (known after apply) 2025-07-04 17:24:09.782270 | orchestrator | 17:24:09.782 STDOUT terraform:  + region = (known after apply) 2025-07-04 17:24:09.782273 | orchestrator | 17:24:09.782 STDOUT terraform:  + router_id = (known after apply) 2025-07-04 17:24:09.782277 | orchestrator | 17:24:09.782 STDOUT terraform:  + subnet_id = (known after apply) 2025-07-04 17:24:09.782283 | orchestrator | 17:24:09.782 STDOUT terraform:  } 2025-07-04 17:24:09.782287 | orchestrator | 
17:24:09.782 STDOUT terraform:  # openstack_networking_router_v2.router will be created 2025-07-04 17:24:09.782311 | orchestrator | 17:24:09.782 STDOUT terraform:  + resource "openstack_networking_router_v2" "router" { 2025-07-04 17:24:09.782352 | orchestrator | 17:24:09.782 STDOUT terraform:  + admin_state_up = (known after apply) 2025-07-04 17:24:09.782392 | orchestrator | 17:24:09.782 STDOUT terraform:  + all_tags = (known after apply) 2025-07-04 17:24:09.782416 | orchestrator | 17:24:09.782 STDOUT terraform:  + availability_zone_hints = [ 2025-07-04 17:24:09.782423 | orchestrator | 17:24:09.782 STDOUT terraform:  + "nova", 2025-07-04 17:24:09.782443 | orchestrator | 17:24:09.782 STDOUT terraform:  ] 2025-07-04 17:24:09.782636 | orchestrator | 17:24:09.782 STDOUT terraform:  + distributed = (known after apply) 2025-07-04 17:24:09.782646 | orchestrator | 17:24:09.782 STDOUT terraform:  + enable_snat = (known after apply) 2025-07-04 17:24:09.782658 | orchestrator | 17:24:09.782 STDOUT terraform:  + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2025-07-04 17:24:09.782668 | orchestrator | 17:24:09.782 STDOUT terraform:  + external_qos_policy_id = (known after apply) 2025-07-04 17:24:09.782674 | orchestrator | 17:24:09.782 STDOUT terraform:  + id = (known after apply) 2025-07-04 17:24:09.782680 | orchestrator | 17:24:09.782 STDOUT terraform:  + name = "testbed" 2025-07-04 17:24:09.782718 | orchestrator | 17:24:09.782 STDOUT terraform:  + region = (known after apply) 2025-07-04 17:24:09.782752 | orchestrator | 17:24:09.782 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-04 17:24:09.782780 | orchestrator | 17:24:09.782 STDOUT terraform:  + external_fixed_ip (known after apply) 2025-07-04 17:24:09.782786 | orchestrator | 17:24:09.782 STDOUT terraform:  } 2025-07-04 17:24:09.782839 | orchestrator | 17:24:09.782 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2025-07-04 17:24:09.782889 
| orchestrator | 17:24:09.782 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2025-07-04 17:24:09.782913 | orchestrator | 17:24:09.782 STDOUT terraform:  + description = "ssh" 2025-07-04 17:24:09.782941 | orchestrator | 17:24:09.782 STDOUT terraform:  + direction = "ingress" 2025-07-04 17:24:09.782967 | orchestrator | 17:24:09.782 STDOUT terraform:  + ethertype = "IPv4" 2025-07-04 17:24:09.783002 | orchestrator | 17:24:09.782 STDOUT terraform:  + id = (known after apply) 2025-07-04 17:24:09.783030 | orchestrator | 17:24:09.782 STDOUT terraform:  + port_range_max = 22 2025-07-04 17:24:09.783055 | orchestrator | 17:24:09.783 STDOUT terraform:  + port_range_min = 22 2025-07-04 17:24:09.783078 | orchestrator | 17:24:09.783 STDOUT terraform:  + protocol = "tcp" 2025-07-04 17:24:09.783113 | orchestrator | 17:24:09.783 STDOUT terraform:  + region = (known after apply) 2025-07-04 17:24:09.783146 | orchestrator | 17:24:09.783 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-07-04 17:24:09.783181 | orchestrator | 17:24:09.783 STDOUT terraform:  + remote_group_id = (known after apply) 2025-07-04 17:24:09.783247 | orchestrator | 17:24:09.783 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-07-04 17:24:09.783253 | orchestrator | 17:24:09.783 STDOUT terraform:  + security_group_id = (known after apply) 2025-07-04 17:24:09.783282 | orchestrator | 17:24:09.783 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-04 17:24:09.783297 | orchestrator | 17:24:09.783 STDOUT terraform:  } 2025-07-04 17:24:09.783357 | orchestrator | 17:24:09.783 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2025-07-04 17:24:09.783413 | orchestrator | 17:24:09.783 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2025-07-04 17:24:09.783441 | orchestrator | 17:24:09.783 STDOUT terraform:  + 
description = "wireguard" 2025-07-04 17:24:09.783468 | orchestrator | 17:24:09.783 STDOUT terraform:  + direction = "ingress" 2025-07-04 17:24:09.783491 | orchestrator | 17:24:09.783 STDOUT terraform:  + ethertype = "IPv4" 2025-07-04 17:24:09.783527 | orchestrator | 17:24:09.783 STDOUT terraform:  + id = (known after apply) 2025-07-04 17:24:09.783553 | orchestrator | 17:24:09.783 STDOUT terraform:  + port_range_max = 51820 2025-07-04 17:24:09.783576 | orchestrator | 17:24:09.783 STDOUT terraform:  + port_range_min = 51820 2025-07-04 17:24:09.783599 | orchestrator | 17:24:09.783 STDOUT terraform:  + protocol = "udp" 2025-07-04 17:24:09.783650 | orchestrator | 17:24:09.783 STDOUT terraform:  + region = (known after apply) 2025-07-04 17:24:09.783683 | orchestrator | 17:24:09.783 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-07-04 17:24:09.783722 | orchestrator | 17:24:09.783 STDOUT terraform:  + remote_group_id = (known after apply) 2025-07-04 17:24:09.783753 | orchestrator | 17:24:09.783 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-07-04 17:24:09.783794 | orchestrator | 17:24:09.783 STDOUT terraform:  + security_group_id = (known after apply) 2025-07-04 17:24:09.783825 | orchestrator | 17:24:09.783 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-04 17:24:09.783831 | orchestrator | 17:24:09.783 STDOUT terraform:  } 2025-07-04 17:24:09.783887 | orchestrator | 17:24:09.783 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2025-07-04 17:24:09.783939 | orchestrator | 17:24:09.783 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2025-07-04 17:24:09.783967 | orchestrator | 17:24:09.783 STDOUT terraform:  + direction = "ingress" 2025-07-04 17:24:09.783990 | orchestrator | 17:24:09.783 STDOUT terraform:  + ethertype = "IPv4" 2025-07-04 17:24:09.784025 | orchestrator | 17:24:09.783 STDOUT terraform:  + id = (known 
after apply) 2025-07-04 17:24:09.784054 | orchestrator | 17:24:09.784 STDOUT terraform:  + protocol = "tcp" 2025-07-04 17:24:09.784094 | orchestrator | 17:24:09.784 STDOUT terraform:  + region = (known after apply) 2025-07-04 17:24:09.784127 | orchestrator | 17:24:09.784 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-07-04 17:24:09.784161 | orchestrator | 17:24:09.784 STDOUT terraform:  + remote_group_id = (known after apply) 2025-07-04 17:24:09.784195 | orchestrator | 17:24:09.784 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-07-04 17:24:09.784239 | orchestrator | 17:24:09.784 STDOUT terraform:  + security_group_id = (known after apply) 2025-07-04 17:24:09.784274 | orchestrator | 17:24:09.784 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-04 17:24:09.784280 | orchestrator | 17:24:09.784 STDOUT terraform:  } 2025-07-04 17:24:09.784337 | orchestrator | 17:24:09.784 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-07-04 17:24:09.784392 | orchestrator | 17:24:09.784 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-07-04 17:24:09.784421 | orchestrator | 17:24:09.784 STDOUT terraform:  + direction = "ingress" 2025-07-04 17:24:09.784450 | orchestrator | 17:24:09.784 STDOUT terraform:  + ethertype = "IPv4" 2025-07-04 17:24:09.784486 | orchestrator | 17:24:09.784 STDOUT terraform:  + id = (known after apply) 2025-07-04 17:24:09.784509 | orchestrator | 17:24:09.784 STDOUT terraform:  + protocol = "udp" 2025-07-04 17:24:09.784547 | orchestrator | 17:24:09.784 STDOUT terraform:  + region = (known after apply) 2025-07-04 17:24:09.784580 | orchestrator | 17:24:09.784 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-07-04 17:24:09.784623 | orchestrator | 17:24:09.784 STDOUT terraform:  + remote_group_id = (known after apply) 2025-07-04 17:24:09.784656 | orchestrator | 
17:24:09.784 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-07-04 17:24:09.784695 | orchestrator | 17:24:09.784 STDOUT terraform:  + security_group_id = (known after apply) 2025-07-04 17:24:09.784730 | orchestrator | 17:24:09.784 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-04 17:24:09.784736 | orchestrator | 17:24:09.784 STDOUT terraform:  } 2025-07-04 17:24:09.784793 | orchestrator | 17:24:09.784 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-07-04 17:24:09.784843 | orchestrator | 17:24:09.784 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2025-07-04 17:24:09.784874 | orchestrator | 17:24:09.784 STDOUT terraform:  + direction = "ingress" 2025-07-04 17:24:09.784900 | orchestrator | 17:24:09.784 STDOUT terraform:  + ethertype = "IPv4" 2025-07-04 17:24:09.784935 | orchestrator | 17:24:09.784 STDOUT terraform:  + id = (known after apply) 2025-07-04 17:24:09.784958 | orchestrator | 17:24:09.784 STDOUT terraform:  + protocol = "icmp" 2025-07-04 17:24:09.784997 | orchestrator | 17:24:09.784 STDOUT terraform:  + region = (known after apply) 2025-07-04 17:24:09.785031 | orchestrator | 17:24:09.784 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-07-04 17:24:09.785065 | orchestrator | 17:24:09.785 STDOUT terraform:  + remote_group_id = (known after apply) 2025-07-04 17:24:09.785093 | orchestrator | 17:24:09.785 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-07-04 17:24:09.785144 | orchestrator | 17:24:09.785 STDOUT terraform:  + security_group_id = (known after apply) 2025-07-04 17:24:09.785172 | orchestrator | 17:24:09.785 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-04 17:24:09.785178 | orchestrator | 17:24:09.785 STDOUT terraform:  } 2025-07-04 17:24:09.785231 | orchestrator | 17:24:09.785 STDOUT terraform:  # 
openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
2025-07-04 17:24:09.785278 | orchestrator | 17:24:09.785 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
2025-07-04 17:24:09.785312 | orchestrator | 17:24:09.785 STDOUT terraform:  + direction = "ingress"
2025-07-04 17:24:09.785335 | orchestrator | 17:24:09.785 STDOUT terraform:  + ethertype = "IPv4"
2025-07-04 17:24:09.785371 | orchestrator | 17:24:09.785 STDOUT terraform:  + id = (known after apply)
2025-07-04 17:24:09.785395 | orchestrator | 17:24:09.785 STDOUT terraform:  + protocol = "tcp"
2025-07-04 17:24:09.785430 | orchestrator | 17:24:09.785 STDOUT terraform:  + region = (known after apply)
2025-07-04 17:24:09.785466 | orchestrator | 17:24:09.785 STDOUT terraform:  + remote_address_group_id = (known after apply)
2025-07-04 17:24:09.785506 | orchestrator | 17:24:09.785 STDOUT terraform:  + remote_group_id = (known after apply)
2025-07-04 17:24:09.785536 | orchestrator | 17:24:09.785 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-07-04 17:24:09.785571 | orchestrator | 17:24:09.785 STDOUT terraform:  + security_group_id = (known after apply)
2025-07-04 17:24:09.785617 | orchestrator | 17:24:09.785 STDOUT terraform:  + tenant_id = (known after apply)
2025-07-04 17:24:09.785644 | orchestrator | 17:24:09.785 STDOUT terraform:  }
2025-07-04 17:24:09.785692 | orchestrator | 17:24:09.785 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
2025-07-04 17:24:09.785743 | orchestrator | 17:24:09.785 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
2025-07-04 17:24:09.785771 | orchestrator | 17:24:09.785 STDOUT terraform:  + direction = "ingress"
2025-07-04 17:24:09.785794 | orchestrator | 17:24:09.785 STDOUT terraform:  + ethertype = "IPv4"
2025-07-04 17:24:09.785829 | orchestrator | 17:24:09.785 STDOUT terraform:  + id = (known after apply)
2025-07-04 17:24:09.785853 | orchestrator | 17:24:09.785 STDOUT terraform:  + protocol = "udp"
2025-07-04 17:24:09.785899 | orchestrator | 17:24:09.785 STDOUT terraform:  + region = (known after apply)
2025-07-04 17:24:09.785934 | orchestrator | 17:24:09.785 STDOUT terraform:  + remote_address_group_id = (known after apply)
2025-07-04 17:24:09.785968 | orchestrator | 17:24:09.785 STDOUT terraform:  + remote_group_id = (known after apply)
2025-07-04 17:24:09.785996 | orchestrator | 17:24:09.785 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-07-04 17:24:09.786044 | orchestrator | 17:24:09.785 STDOUT terraform:  + security_group_id = (known after apply)
2025-07-04 17:24:09.786078 | orchestrator | 17:24:09.786 STDOUT terraform:  + tenant_id = (known after apply)
2025-07-04 17:24:09.786085 | orchestrator | 17:24:09.786 STDOUT terraform:  }
2025-07-04 17:24:09.786134 | orchestrator | 17:24:09.786 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
2025-07-04 17:24:09.786202 | orchestrator | 17:24:09.786 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
2025-07-04 17:24:09.786242 | orchestrator | 17:24:09.786 STDOUT terraform:  + direction = "ingress"
2025-07-04 17:24:09.786269 | orchestrator | 17:24:09.786 STDOUT terraform:  + ethertype = "IPv4"
2025-07-04 17:24:09.786312 | orchestrator | 17:24:09.786 STDOUT terraform:  + id = (known after apply)
2025-07-04 17:24:09.786338 | orchestrator | 17:24:09.786 STDOUT terraform:  + protocol = "icmp"
2025-07-04 17:24:09.786374 | orchestrator | 17:24:09.786 STDOUT terraform:  + region = (known after apply)
2025-07-04 17:24:09.786408 | orchestrator | 17:24:09.786 STDOUT terraform:  + remote_address_group_id = (known after apply)
2025-07-04 17:24:09.786445 | orchestrator | 17:24:09.786 STDOUT terraform:  + remote_group_id = (known after apply)
2025-07-04 17:24:09.786472 | orchestrator | 17:24:09.786 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-07-04 17:24:09.786509 | orchestrator | 17:24:09.786 STDOUT terraform:  + security_group_id = (known after apply)
2025-07-04 17:24:09.786544 | orchestrator | 17:24:09.786 STDOUT terraform:  + tenant_id = (known after apply)
2025-07-04 17:24:09.786550 | orchestrator | 17:24:09.786 STDOUT terraform:  }
2025-07-04 17:24:09.786603 | orchestrator | 17:24:09.786 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
2025-07-04 17:24:09.786665 | orchestrator | 17:24:09.786 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
2025-07-04 17:24:09.786688 | orchestrator | 17:24:09.786 STDOUT terraform:  + description = "vrrp"
2025-07-04 17:24:09.786718 | orchestrator | 17:24:09.786 STDOUT terraform:  + direction = "ingress"
2025-07-04 17:24:09.786741 | orchestrator | 17:24:09.786 STDOUT terraform:  + ethertype = "IPv4"
2025-07-04 17:24:09.786780 | orchestrator | 17:24:09.786 STDOUT terraform:  + id = (known after apply)
2025-07-04 17:24:09.786804 | orchestrator | 17:24:09.786 STDOUT terraform:  + protocol = "112"
2025-07-04 17:24:09.786838 | orchestrator | 17:24:09.786 STDOUT terraform:  + region = (known after apply)
2025-07-04 17:24:09.786871 | orchestrator | 17:24:09.786 STDOUT terraform:  + remote_address_group_id = (known after apply)
2025-07-04 17:24:09.786906 | orchestrator | 17:24:09.786 STDOUT terraform:  + remote_group_id = (known after apply)
2025-07-04 17:24:09.786933 | orchestrator | 17:24:09.786 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-07-04 17:24:09.786975 | orchestrator | 17:24:09.786 STDOUT terraform:  + security_group_id = (known after apply)
2025-07-04 17:24:09.787009 | orchestrator | 17:24:09.786 STDOUT terraform:  + tenant_id = (known after apply)
2025-07-04 17:24:09.787016 | orchestrator | 17:24:09.787 STDOUT terraform:  }
2025-07-04 17:24:09.787066 | orchestrator | 17:24:09.787 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_management will be created
2025-07-04 17:24:09.787115 | orchestrator | 17:24:09.787 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" {
2025-07-04 17:24:09.787142 | orchestrator | 17:24:09.787 STDOUT terraform:  + all_tags = (known after apply)
2025-07-04 17:24:09.787174 | orchestrator | 17:24:09.787 STDOUT terraform:  + description = "management security group"
2025-07-04 17:24:09.787201 | orchestrator | 17:24:09.787 STDOUT terraform:  + id = (known after apply)
2025-07-04 17:24:09.787232 | orchestrator | 17:24:09.787 STDOUT terraform:  + name = "testbed-management"
2025-07-04 17:24:09.787262 | orchestrator | 17:24:09.787 STDOUT terraform:  + region = (known after apply)
2025-07-04 17:24:09.787294 | orchestrator | 17:24:09.787 STDOUT terraform:  + stateful = (known after apply)
2025-07-04 17:24:09.787326 | orchestrator | 17:24:09.787 STDOUT terraform:  + tenant_id = (known after apply)
2025-07-04 17:24:09.787332 | orchestrator | 17:24:09.787 STDOUT terraform:  }
2025-07-04 17:24:09.787378 | orchestrator | 17:24:09.787 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created
2025-07-04 17:24:09.787423 | orchestrator | 17:24:09.787 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" {
2025-07-04 17:24:09.787447 | orchestrator | 17:24:09.787 STDOUT terraform:  + all_tags = (known after apply)
2025-07-04 17:24:09.787476 | orchestrator | 17:24:09.787 STDOUT terraform:  + description = "node security group"
2025-07-04 17:24:09.787508 | orchestrator | 17:24:09.787 STDOUT terraform:  + id = (known after apply)
2025-07-04 17:24:09.787532 | orchestrator | 17:24:09.787 STDOUT terraform:  + name = "testbed-node"
2025-07-04 17:24:09.787559 | orchestrator | 17:24:09.787 STDOUT terraform:  + region = (known after apply)
2025-07-04 17:24:09.787585 | orchestrator | 17:24:09.787 STDOUT terraform:  + stateful = (known after apply)
2025-07-04 17:24:09.787626 | orchestrator | 17:24:09.787 STDOUT terraform:  + tenant_id = (known after apply)
2025-07-04 17:24:09.787632 | orchestrator | 17:24:09.787 STDOUT terraform:  }
2025-07-04 17:24:09.787679 | orchestrator | 17:24:09.787 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created
2025-07-04 17:24:09.787724 | orchestrator | 17:24:09.787 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" {
2025-07-04 17:24:09.787753 | orchestrator | 17:24:09.787 STDOUT terraform:  + all_tags = (known after apply)
2025-07-04 17:24:09.787782 | orchestrator | 17:24:09.787 STDOUT terraform:  + cidr = "192.168.16.0/20"
2025-07-04 17:24:09.787800 | orchestrator | 17:24:09.787 STDOUT terraform:  + dns_nameservers = [
2025-07-04 17:24:09.787816 | orchestrator | 17:24:09.787 STDOUT terraform:  + "8.8.8.8",
2025-07-04 17:24:09.787826 | orchestrator | 17:24:09.787 STDOUT terraform:  + "9.9.9.9",
2025-07-04 17:24:09.787843 | orchestrator | 17:24:09.787 STDOUT terraform:  ]
2025-07-04 17:24:09.787863 | orchestrator | 17:24:09.787 STDOUT terraform:  + enable_dhcp = true
2025-07-04 17:24:09.787898 | orchestrator | 17:24:09.787 STDOUT terraform:  + gateway_ip = (known after apply)
2025-07-04 17:24:09.787936 | orchestrator | 17:24:09.787 STDOUT terraform:  + id = (known after apply)
2025-07-04 17:24:09.787954 | orchestrator | 17:24:09.787 STDOUT terraform:  + ip_version = 4
2025-07-04 17:24:09.787983 | orchestrator | 17:24:09.787 STDOUT terraform:  + ipv6_address_mode = (known after apply)
2025-07-04 17:24:09.788018 | orchestrator | 17:24:09.787 STDOUT terraform:  + ipv6_ra_mode = (known after apply)
2025-07-04 17:24:09.788057 | orchestrator | 17:24:09.788 STDOUT terraform:  + name = "subnet-testbed-management"
2025-07-04 17:24:09.788085 | orchestrator | 17:24:09.788 STDOUT terraform:  + network_id = (known after apply)
2025-07-04 17:24:09.788105 | orchestrator | 17:24:09.788 STDOUT terraform:  + no_gateway = false
2025-07-04 17:24:09.788136 | orchestrator | 17:24:09.788 STDOUT terraform:  + region = (known after apply)
2025-07-04 17:24:09.788164 | orchestrator | 17:24:09.788 STDOUT terraform:  + service_types = (known after apply)
2025-07-04 17:24:09.788201 | orchestrator | 17:24:09.788 STDOUT terraform:  + tenant_id = (known after apply)
2025-07-04 17:24:09.788218 | orchestrator | 17:24:09.788 STDOUT terraform:  + allocation_pool {
2025-07-04 17:24:09.788241 | orchestrator | 17:24:09.788 STDOUT terraform:  + end = "192.168.31.250"
2025-07-04 17:24:09.788266 | orchestrator | 17:24:09.788 STDOUT terraform:  + start = "192.168.31.200"
2025-07-04 17:24:09.788272 | orchestrator | 17:24:09.788 STDOUT terraform:  }
2025-07-04 17:24:09.788288 | orchestrator | 17:24:09.788 STDOUT terraform:  }
2025-07-04 17:24:09.788310 | orchestrator | 17:24:09.788 STDOUT terraform:  # terraform_data.image will be created
2025-07-04 17:24:09.788332 | orchestrator | 17:24:09.788 STDOUT terraform:  + resource "terraform_data" "image" {
2025-07-04 17:24:09.788356 | orchestrator | 17:24:09.788 STDOUT terraform:  + id = (known after apply)
2025-07-04 17:24:09.788374 | orchestrator | 17:24:09.788 STDOUT terraform:  + input = "Ubuntu 24.04"
2025-07-04 17:24:09.788399 | orchestrator | 17:24:09.788 STDOUT terraform:  + output = (known after apply)
2025-07-04 17:24:09.788419 | orchestrator | 17:24:09.788 STDOUT terraform:  }
2025-07-04 17:24:09.788446 | orchestrator | 17:24:09.788 STDOUT terraform:  # terraform_data.image_node will be created
2025-07-04 17:24:09.788473 | orchestrator | 17:24:09.788 STDOUT terraform:  + resource "terraform_data" "image_node" {
2025-07-04 17:24:09.788495 | orchestrator | 17:24:09.788 STDOUT terraform:  + id = (known after apply)
2025-07-04 17:24:09.788516 | orchestrator | 17:24:09.788 STDOUT terraform:  + input = "Ubuntu 24.04"
2025-07-04 17:24:09.788538 | orchestrator | 17:24:09.788 STDOUT terraform:  + output = (known after apply)
2025-07-04 17:24:09.788544 | orchestrator | 17:24:09.788 STDOUT terraform:  }
2025-07-04 17:24:09.788576 | orchestrator | 17:24:09.788 STDOUT terraform: Plan: 64 to add, 0 to change, 0 to destroy.
2025-07-04 17:24:09.788585 | orchestrator | 17:24:09.788 STDOUT terraform: Changes to Outputs:
2025-07-04 17:24:09.788625 | orchestrator | 17:24:09.788 STDOUT terraform:  + manager_address = (sensitive value)
2025-07-04 17:24:09.788646 | orchestrator | 17:24:09.788 STDOUT terraform:  + private_key = (sensitive value)
2025-07-04 17:24:09.862526 | orchestrator | 17:24:09.862 STDOUT terraform: terraform_data.image: Creating...
2025-07-04 17:24:09.862693 | orchestrator | 17:24:09.862 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=3b3f60f2-4632-4357-816c-7fd3e8c1198e]
2025-07-04 17:24:09.982188 | orchestrator | 17:24:09.981 STDOUT terraform: terraform_data.image_node: Creating...
2025-07-04 17:24:09.982291 | orchestrator | 17:24:09.981 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=7e6f745b-010f-3e84-bb35-e89442defaf0]
2025-07-04 17:24:10.001381 | orchestrator | 17:24:10.001 STDOUT terraform: data.openstack_images_image_v2.image: Reading...
2025-07-04 17:24:10.003810 | orchestrator | 17:24:10.003 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading...
2025-07-04 17:24:10.010882 | orchestrator | 17:24:10.010 STDOUT terraform: openstack_compute_keypair_v2.key: Creating...
2025-07-04 17:24:10.011566 | orchestrator | 17:24:10.011 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2025-07-04 17:24:10.012530 | orchestrator | 17:24:10.012 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2025-07-04 17:24:10.013798 | orchestrator | 17:24:10.013 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2025-07-04 17:24:10.014668 | orchestrator | 17:24:10.014 STDOUT terraform: openstack_networking_network_v2.net_management: Creating...
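The planned security-group resources above map directly onto provider resources. A minimal sketch of the VRRP rule and its parent group, reconstructed from the attribute values shown in the plan (anything not shown there, such as how the two are wired together, is an assumption):

```hcl
# Parent security group, as planned ("testbed-node").
resource "openstack_networking_secgroup_v2" "security_group_node" {
  name        = "testbed-node"
  description = "node security group"
}

# IP protocol number 112 is VRRP; Neutron accepts the number as a string.
resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
  description       = "vrrp"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "112"
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_node.id
}
```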
2025-07-04 17:24:10.018528 | orchestrator | 17:24:10.018 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2025-07-04 17:24:10.021350 | orchestrator | 17:24:10.021 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2025-07-04 17:24:10.030053 | orchestrator | 17:24:10.029 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2025-07-04 17:24:10.442749 | orchestrator | 17:24:10.439 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 0s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990]
2025-07-04 17:24:10.444912 | orchestrator | 17:24:10.444 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 0s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990]
2025-07-04 17:24:10.450058 | orchestrator | 17:24:10.449 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2025-07-04 17:24:10.454604 | orchestrator | 17:24:10.453 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2025-07-04 17:24:10.522340 | orchestrator | 17:24:10.519 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed]
2025-07-04 17:24:10.536022 | orchestrator | 17:24:10.535 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2025-07-04 17:24:15.964584 | orchestrator | 17:24:15.963 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 6s [id=0cd31550-7230-47e7-a5de-8835c61d829d]
2025-07-04 17:24:15.977039 | orchestrator | 17:24:15.976 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2025-07-04 17:24:20.014783 | orchestrator | 17:24:20.014 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Still creating... [10s elapsed]
2025-07-04 17:24:20.016912 | orchestrator | 17:24:20.016 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Still creating... [10s elapsed]
2025-07-04 17:24:20.018946 | orchestrator | 17:24:20.018 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Still creating... [10s elapsed]
2025-07-04 17:24:20.021115 | orchestrator | 17:24:20.020 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Still creating... [10s elapsed]
2025-07-04 17:24:20.022185 | orchestrator | 17:24:20.022 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Still creating... [10s elapsed]
2025-07-04 17:24:20.030453 | orchestrator | 17:24:20.030 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Still creating... [10s elapsed]
2025-07-04 17:24:20.451208 | orchestrator | 17:24:20.450 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Still creating... [10s elapsed]
2025-07-04 17:24:20.455253 | orchestrator | 17:24:20.455 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Still creating... [10s elapsed]
2025-07-04 17:24:20.533733 | orchestrator | 17:24:20.533 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Still creating... [10s elapsed]
2025-07-04 17:24:20.589647 | orchestrator | 17:24:20.589 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 11s [id=36831ba3-00a3-40d1-8c8d-d5688ce5b92e]
2025-07-04 17:24:20.592399 | orchestrator | 17:24:20.590 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 11s [id=f1ee158f-8183-4691-b988-cdb0b3746d63]
2025-07-04 17:24:20.597585 | orchestrator | 17:24:20.597 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2025-07-04 17:24:20.601177 | orchestrator | 17:24:20.601 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
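The indexed `node_volume[0]` through `node_volume[8]` lines are the parallel instances of a single counted resource. Under the usual `count` pattern this looks roughly like the following sketch; the name scheme and size are assumptions, since the log does not record those arguments:

```hcl
resource "openstack_blockstorage_volume_v3" "node_volume" {
  count = 9 # node_volume[0] through node_volume[8] in the log

  # Hypothetical naming and sizing; not taken from the log.
  name = "testbed-node-volume-${count.index}"
  size = 20
}
```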
2025-07-04 17:24:20.610897 | orchestrator | 17:24:20.610 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 11s [id=cc10544f-afe1-4b17-ac35-d479dbd44023]
2025-07-04 17:24:20.616769 | orchestrator | 17:24:20.616 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2025-07-04 17:24:20.622558 | orchestrator | 17:24:20.622 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 11s [id=c678ea0e-f232-4db4-9458-94e4077f665f]
2025-07-04 17:24:20.628701 | orchestrator | 17:24:20.628 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2025-07-04 17:24:20.632800 | orchestrator | 17:24:20.632 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 11s [id=cc9ae976-88cb-4b21-9449-d8985ff12d4f]
2025-07-04 17:24:20.641705 | orchestrator | 17:24:20.641 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2025-07-04 17:24:20.674103 | orchestrator | 17:24:20.673 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 11s [id=9dcda133-58d2-4853-8afe-c4a876875c80]
2025-07-04 17:24:20.674766 | orchestrator | 17:24:20.674 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 11s [id=d957e37b-6f48-487c-9682-d56dbc604f5a]
2025-07-04 17:24:20.675764 | orchestrator | 17:24:20.675 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 11s [id=22af1316-5bc1-4af9-ac7a-65db3b57cabb]
2025-07-04 17:24:20.695687 | orchestrator | 17:24:20.695 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating...
2025-07-04 17:24:20.703980 | orchestrator | 17:24:20.703 STDOUT terraform: local_file.id_rsa_pub: Creating...
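The `subnet_management` resource now being created matches the subnet planned earlier (CIDR 192.168.16.0/20, DHCP pool 192.168.31.200-250, Google and Quad9 resolvers). A sketch built from those planned values; only the `network_id` wiring is an assumption:

```hcl
resource "openstack_networking_subnet_v2" "subnet_management" {
  name            = "subnet-testbed-management"
  network_id      = openstack_networking_network_v2.net_management.id
  cidr            = "192.168.16.0/20"
  ip_version      = 4
  enable_dhcp     = true
  dns_nameservers = ["8.8.8.8", "9.9.9.9"]

  # DHCP hands out addresses only from this slice of the /20.
  allocation_pool {
    start = "192.168.31.200"
    end   = "192.168.31.250"
  }
}
```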
2025-07-04 17:24:20.710092 | orchestrator | 17:24:20.709 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=225b0f5f0f9160f08fe266517782a486d163982a]
2025-07-04 17:24:20.715657 | orchestrator | 17:24:20.713 STDOUT terraform: local_sensitive_file.id_rsa: Creating...
2025-07-04 17:24:20.715817 | orchestrator | 17:24:20.715 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2025-07-04 17:24:20.718327 | orchestrator | 17:24:20.718 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=fe0206263a8ae4406ba7a3a85b69373ed569bae9]
2025-07-04 17:24:20.721307 | orchestrator | 17:24:20.721 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 10s [id=f2e9dc75-50de-4afc-bb89-e69d1400c858]
2025-07-04 17:24:25.978597 | orchestrator | 17:24:25.978 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Still creating... [10s elapsed]
2025-07-04 17:24:26.300157 | orchestrator | 17:24:26.299 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 10s [id=73bea867-1ba5-4f47-b639-11ca888d72cd]
2025-07-04 17:24:27.272092 | orchestrator | 17:24:27.271 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 6s [id=4a5aeb51-073d-4433-9358-b40b1a9a5d01]
2025-07-04 17:24:27.284928 | orchestrator | 17:24:27.284 STDOUT terraform: openstack_networking_router_v2.router: Creating...
2025-07-04 17:24:30.598852 | orchestrator | 17:24:30.598 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Still creating... [10s elapsed]
2025-07-04 17:24:30.603145 | orchestrator | 17:24:30.602 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Still creating... [10s elapsed]
2025-07-04 17:24:30.618368 | orchestrator | 17:24:30.618 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Still creating... [10s elapsed]
2025-07-04 17:24:30.630742 | orchestrator | 17:24:30.630 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Still creating... [10s elapsed]
2025-07-04 17:24:30.643146 | orchestrator | 17:24:30.642 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Still creating... [10s elapsed]
2025-07-04 17:24:30.717838 | orchestrator | 17:24:30.717 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Still creating... [10s elapsed]
2025-07-04 17:24:30.944229 | orchestrator | 17:24:30.943 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 10s [id=e5fbf5c6-81a8-4539-96cc-19329771a958]
2025-07-04 17:24:30.974011 | orchestrator | 17:24:30.973 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 10s [id=ab42ac05-5a2a-4b10-b0be-14fcaa2726cd]
2025-07-04 17:24:31.019570 | orchestrator | 17:24:31.019 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 10s [id=acc40fa0-2709-4f7e-bb91-7c7e8e422ea3]
2025-07-04 17:24:31.040161 | orchestrator | 17:24:31.039 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 10s [id=654ae738-db23-4503-810d-da49c3934f2e]
2025-07-04 17:24:31.114947 | orchestrator | 17:24:31.114 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 10s [id=fb812322-6028-4c88-9c9a-0c04dc1dfbca]
2025-07-04 17:24:31.164273 | orchestrator | 17:24:31.163 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 10s [id=eaff6497-f877-4154-a511-2c6d9abffd21]
2025-07-04 17:24:35.573361 | orchestrator | 17:24:35.572 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 9s [id=51546d6c-85ce-4f28-a969-de894c9ed774]
2025-07-04 17:24:35.580356 | orchestrator | 17:24:35.579 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating...
2025-07-04 17:24:35.580437 | orchestrator | 17:24:35.579 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating...
2025-07-04 17:24:35.580448 | orchestrator | 17:24:35.579 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating...
2025-07-04 17:24:35.771400 | orchestrator | 17:24:35.771 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=38f74d30-1934-4234-a62d-f11b80b58433]
2025-07-04 17:24:35.783664 | orchestrator | 17:24:35.783 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2025-07-04 17:24:35.787038 | orchestrator | 17:24:35.786 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2025-07-04 17:24:35.789724 | orchestrator | 17:24:35.789 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2025-07-04 17:24:35.789867 | orchestrator | 17:24:35.789 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating...
2025-07-04 17:24:35.792232 | orchestrator | 17:24:35.792 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2025-07-04 17:24:35.792726 | orchestrator | 17:24:35.792 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2025-07-04 17:24:35.795954 | orchestrator | 17:24:35.795 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=997b1c0d-6322-47da-8c36-14a23a6a41f1]
2025-07-04 17:24:35.801643 | orchestrator | 17:24:35.801 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2025-07-04 17:24:35.801863 | orchestrator | 17:24:35.801 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2025-07-04 17:24:35.805348 | orchestrator | 17:24:35.805 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2025-07-04 17:24:36.007664 | orchestrator | 17:24:36.007 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 0s [id=c11b05c0-b102-4d70-8689-146a1eb09e60]
2025-07-04 17:24:36.008517 | orchestrator | 17:24:36.007 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 0s [id=fdde4e7a-ac36-4a42-b079-99dcfc9670cf]
2025-07-04 17:24:36.014764 | orchestrator | 17:24:36.014 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2025-07-04 17:24:36.024387 | orchestrator | 17:24:36.024 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating...
2025-07-04 17:24:36.213087 | orchestrator | 17:24:36.212 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 0s [id=dba8eef3-13b1-45b3-b2e0-a665ed268c8d]
2025-07-04 17:24:36.227385 | orchestrator | 17:24:36.227 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating...
2025-07-04 17:24:36.349611 | orchestrator | 17:24:36.344 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 0s [id=5efaae63-2d54-41e9-bef5-6cdb0911987a]
2025-07-04 17:24:36.376606 | orchestrator | 17:24:36.376 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating...
2025-07-04 17:24:36.422801 | orchestrator | 17:24:36.422 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 0s [id=6e4804ce-e098-4779-854f-d04a9c5a595c]
2025-07-04 17:24:36.438689 | orchestrator | 17:24:36.438 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating...
2025-07-04 17:24:36.743402 | orchestrator | 17:24:36.742 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=629030dc-8d6e-41a8-9472-ce2f72251c09]
2025-07-04 17:24:36.759862 | orchestrator | 17:24:36.759 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating...
2025-07-04 17:24:36.824224 | orchestrator | 17:24:36.823 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=c2853df5-253b-461e-9cdc-56506e1a668c]
2025-07-04 17:24:36.840859 | orchestrator | 17:24:36.840 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating...
2025-07-04 17:24:37.213947 | orchestrator | 17:24:37.213 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=9dec39a1-e24e-4816-99ec-4c91a720aa5c]
2025-07-04 17:24:37.824449 | orchestrator | 17:24:37.824 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 2s [id=18319cd4-7ab3-46ff-9bfb-2ffcc28c973c]
2025-07-04 17:24:41.611019 | orchestrator | 17:24:41.610 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 6s [id=05f380af-94fa-4e6a-87a3-81808ed32199]
2025-07-04 17:24:41.723306 | orchestrator | 17:24:41.722 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 6s [id=cf0213a8-ce18-4f69-b725-9970be9f78d5]
2025-07-04 17:24:41.926770 | orchestrator | 17:24:41.926 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 6s [id=88dfcb38-f4a5-4952-b23a-0f543de5037d]
2025-07-04 17:24:42.391431 | orchestrator | 17:24:42.391 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 5s [id=2c840f6d-edad-47cc-b989-918b13186518]
2025-07-04 17:24:42.403917 | orchestrator | 17:24:42.403 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 6s [id=9fd8a993-06a0-44ea-b014-4fa07b7b6916]
2025-07-04 17:24:42.514811 | orchestrator | 17:24:42.514 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 6s [id=54db80fc-347f-4012-bb8a-5e8bbd12407a]
2025-07-04 17:24:43.264307 | orchestrator | 17:24:43.263 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 7s [id=01c30035-3d5f-47f0-acc5-ee5151432b5c]
2025-07-04 17:24:44.088253 | orchestrator | 17:24:44.087 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 8s [id=66c8ed80-800a-423a-81f7-97c3cdbe1b43]
2025-07-04 17:24:44.116517 | orchestrator | 17:24:44.114 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2025-07-04 17:24:44.123399 | orchestrator | 17:24:44.123 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating...
2025-07-04 17:24:44.129424 | orchestrator | 17:24:44.129 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating...
2025-07-04 17:24:44.132176 | orchestrator | 17:24:44.132 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating...
2025-07-04 17:24:44.136750 | orchestrator | 17:24:44.136 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating...
2025-07-04 17:24:44.137168 | orchestrator | 17:24:44.137 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating...
2025-07-04 17:24:44.143196 | orchestrator | 17:24:44.143 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating...
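The node servers start only after their management ports finish, because each instance boots attached to its pre-created port. A sketch of that dependency, with flavor and naming as stated assumptions; the log only shows the image name "Ubuntu 24.04" being resolved through a data source:

```hcl
resource "openstack_compute_instance_v2" "node_server" {
  count = 6 # node_server[0] through node_server[5] in the log

  # var.node_flavor and the name scheme are hypothetical.
  name        = "testbed-node-${count.index}"
  image_id    = data.openstack_images_image_v2.image_node.id
  flavor_name = var.node_flavor
  key_pair    = openstack_compute_keypair_v2.key.name

  # Referencing the port (rather than a network) is what forces the
  # ports to be created first, as seen in the log ordering.
  network {
    port = openstack_networking_port_v2.node_port_management[count.index].id
  }
}
```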
2025-07-04 17:24:51.245443 | orchestrator | 17:24:51.244 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 7s [id=696a61c6-885f-4fdb-9bbe-6926e7735c48]
2025-07-04 17:24:51.270290 | orchestrator | 17:24:51.270 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2025-07-04 17:24:51.270374 | orchestrator | 17:24:51.270 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating...
2025-07-04 17:24:51.270594 | orchestrator | 17:24:51.270 STDOUT terraform: local_file.inventory: Creating...
2025-07-04 17:24:51.277919 | orchestrator | 17:24:51.277 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=49a8901c46ac7ee43639b78c5b77cb71c85619bf]
2025-07-04 17:24:51.278833 | orchestrator | 17:24:51.278 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=8b2dca0bb2f0ab5f325409bfa9699f7e9f7cfbdd]
2025-07-04 17:24:52.137680 | orchestrator | 17:24:52.137 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=696a61c6-885f-4fdb-9bbe-6926e7735c48]
2025-07-04 17:24:54.126098 | orchestrator | 17:24:54.125 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2025-07-04 17:24:54.135342 | orchestrator | 17:24:54.135 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2025-07-04 17:24:54.135438 | orchestrator | 17:24:54.135 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2025-07-04 17:24:54.141869 | orchestrator | 17:24:54.141 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2025-07-04 17:24:54.149376 | orchestrator | 17:24:54.149 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2025-07-04 17:24:54.149512 | orchestrator | 17:24:54.149 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2025-07-04 17:25:04.127359 | orchestrator | 17:25:04.127 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2025-07-04 17:25:04.135622 | orchestrator | 17:25:04.135 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2025-07-04 17:25:04.135852 | orchestrator | 17:25:04.135 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2025-07-04 17:25:04.143178 | orchestrator | 17:25:04.142 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2025-07-04 17:25:04.149866 | orchestrator | 17:25:04.149 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2025-07-04 17:25:04.150125 | orchestrator | 17:25:04.149 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2025-07-04 17:25:04.603955 | orchestrator | 17:25:04.603 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 21s [id=7789d9dc-297f-4f0a-8e93-b2640dd761f6]
2025-07-04 17:25:04.629608 | orchestrator | 17:25:04.629 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 21s [id=dbf6ec64-ab89-4eee-813f-67a4d9304cc2]
2025-07-04 17:25:04.751434 | orchestrator | 17:25:04.745 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 21s [id=d614e9e7-cd7a-40bf-94be-52526c3fe1bc]
2025-07-04 17:25:14.136293 | orchestrator | 17:25:14.135 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed]
2025-07-04 17:25:14.143752 | orchestrator | 17:25:14.143 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed]
2025-07-04 17:25:14.150983 | orchestrator | 17:25:14.150 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2025-07-04 17:25:14.977280 | orchestrator | 17:25:14.976 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 31s [id=113da406-d430-45c6-a814-2c1fa3bfc9e9]
2025-07-04 17:25:15.241580 | orchestrator | 17:25:15.241 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 31s [id=99a4a4fa-da33-4d18-a031-779c5a40087e]
2025-07-04 17:25:15.261773 | orchestrator | 17:25:15.261 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 31s [id=27bf7e71-c481-4bbe-91bf-42ffa7204d7e]
2025-07-04 17:25:15.272738 | orchestrator | 17:25:15.272 STDOUT terraform: null_resource.node_semaphore: Creating...
2025-07-04 17:25:15.286137 | orchestrator | 17:25:15.285 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=5852103242216980367]
2025-07-04 17:25:15.290273 | orchestrator | 17:25:15.290 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2025-07-04 17:25:15.305033 | orchestrator | 17:25:15.304 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2025-07-04 17:25:15.310166 | orchestrator | 17:25:15.309 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2025-07-04 17:25:15.319255 | orchestrator | 17:25:15.319 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating...
2025-07-04 17:25:15.323285 | orchestrator | 17:25:15.323 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2025-07-04 17:25:15.329435 | orchestrator | 17:25:15.329 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
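The attachment IDs that follow are `<instance id>/<volume id>` pairs, which matches the two arguments the attach resource takes. A sketch under stated assumptions; in particular, the volume-to-server mapping shown here (three volumes per node via modulo indexing, consistent with the ID pairs in the log) is an inference, not confirmed configuration:

```hcl
resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
  count = 9 # node_volume_attachment[0] through [8] in the log

  # Hypothetical mapping: volume i goes to server (i mod 3)'s group.
  instance_id = openstack_compute_instance_v2.node_server[count.index % 3].id
  volume_id   = openstack_blockstorage_volume_v3.node_volume[count.index].id
}
```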
2025-07-04 17:25:15.330943 | orchestrator | 17:25:15.330 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 2025-07-04 17:25:15.335172 | orchestrator | 17:25:15.335 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 2025-07-04 17:25:15.335229 | orchestrator | 17:25:15.335 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating... 2025-07-04 17:25:15.335239 | orchestrator | 17:25:15.335 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 2025-07-04 17:25:20.652884 | orchestrator | 17:25:20.652 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 6s [id=7789d9dc-297f-4f0a-8e93-b2640dd761f6/36831ba3-00a3-40d1-8c8d-d5688ce5b92e] 2025-07-04 17:25:20.667347 | orchestrator | 17:25:20.666 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 6s [id=27bf7e71-c481-4bbe-91bf-42ffa7204d7e/9dcda133-58d2-4853-8afe-c4a876875c80] 2025-07-04 17:25:20.685972 | orchestrator | 17:25:20.685 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 6s [id=dbf6ec64-ab89-4eee-813f-67a4d9304cc2/c678ea0e-f232-4db4-9458-94e4077f665f] 2025-07-04 17:25:20.715230 | orchestrator | 17:25:20.714 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 6s [id=7789d9dc-297f-4f0a-8e93-b2640dd761f6/d957e37b-6f48-487c-9682-d56dbc604f5a] 2025-07-04 17:25:20.740694 | orchestrator | 17:25:20.740 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 6s [id=27bf7e71-c481-4bbe-91bf-42ffa7204d7e/f2e9dc75-50de-4afc-bb89-e69d1400c858] 2025-07-04 17:25:20.772765 | orchestrator | 17:25:20.772 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 6s 
[id=7789d9dc-297f-4f0a-8e93-b2640dd761f6/cc9ae976-88cb-4b21-9449-d8985ff12d4f] 2025-07-04 17:25:20.807340 | orchestrator | 17:25:20.806 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 6s [id=dbf6ec64-ab89-4eee-813f-67a4d9304cc2/cc10544f-afe1-4b17-ac35-d479dbd44023] 2025-07-04 17:25:23.864455 | orchestrator | 17:25:23.863 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 9s [id=27bf7e71-c481-4bbe-91bf-42ffa7204d7e/22af1316-5bc1-4af9-ac7a-65db3b57cabb] 2025-07-04 17:25:23.894208 | orchestrator | 17:25:23.893 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 9s [id=dbf6ec64-ab89-4eee-813f-67a4d9304cc2/f1ee158f-8183-4691-b988-cdb0b3746d63] 2025-07-04 17:25:25.324579 | orchestrator | 17:25:25.324 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2025-07-04 17:25:35.325815 | orchestrator | 17:25:35.325 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2025-07-04 17:25:35.663697 | orchestrator | 17:25:35.663 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=b2843688-20b7-4a6c-98a2-8ab515181b6e] 2025-07-04 17:25:35.693407 | orchestrator | 17:25:35.693 STDOUT terraform: Apply complete! Resources: 64 added, 0 changed, 0 destroyed. 
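Note that each `openstack_compute_volume_attach_v2` resource id above is the pair `<server_uuid>/<volume_uuid>`. A hypothetical sketch (values taken from the log above) of splitting such an id with plain shell parameter expansion:

```shell
# Illustrative only: a volume_attach id, as logged above, is
# "<server_uuid>/<volume_uuid>"; prefix/suffix stripping splits it.
attach_id='7789d9dc-297f-4f0a-8e93-b2640dd761f6/36831ba3-00a3-40d1-8c8d-d5688ce5b92e'
server_id=${attach_id%%/*}   # everything before the slash
volume_id=${attach_id##*/}   # everything after the slash
echo "server=$server_id volume=$volume_id"
```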
2025-07-04 17:25:35.693471 | orchestrator | 17:25:35.693 STDOUT terraform: Outputs: 2025-07-04 17:25:35.693506 | orchestrator | 17:25:35.693 STDOUT terraform: manager_address = 2025-07-04 17:25:35.693514 | orchestrator | 17:25:35.693 STDOUT terraform: private_key = 2025-07-04 17:25:36.197140 | orchestrator | ok: Runtime: 0:01:36.839077 2025-07-04 17:25:36.242707 | 2025-07-04 17:25:36.242984 | TASK [Fetch manager address] 2025-07-04 17:25:36.762161 | orchestrator | ok 2025-07-04 17:25:36.773630 | 2025-07-04 17:25:36.773778 | TASK [Set manager_host address] 2025-07-04 17:25:36.854440 | orchestrator | ok 2025-07-04 17:25:36.864008 | 2025-07-04 17:25:36.864136 | LOOP [Update ansible collections] 2025-07-04 17:26:04.275244 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-07-04 17:26:04.275647 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-07-04 17:26:04.275714 | orchestrator | Starting galaxy collection install process 2025-07-04 17:26:04.275756 | orchestrator | Process install dependency map 2025-07-04 17:26:04.275795 | orchestrator | Starting collection install process 2025-07-04 17:26:04.275907 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons' 2025-07-04 17:26:04.275954 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons 2025-07-04 17:26:04.275998 | orchestrator | osism.commons:999.0.0 was installed successfully 2025-07-04 17:26:04.276093 | orchestrator | ok: Item: commons Runtime: 0:00:27.082612 2025-07-04 17:26:16.083423 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-07-04 17:26:16.083654 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-07-04 17:26:16.083724 | orchestrator | Starting galaxy 
collection install process 2025-07-04 17:26:16.083774 | orchestrator | Process install dependency map 2025-07-04 17:26:16.083931 | orchestrator | Starting collection install process 2025-07-04 17:26:16.083979 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed05/.ansible/collections/ansible_collections/osism/services' 2025-07-04 17:26:16.084013 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/services 2025-07-04 17:26:16.084045 | orchestrator | osism.services:999.0.0 was installed successfully 2025-07-04 17:26:16.084098 | orchestrator | ok: Item: services Runtime: 0:00:11.542246 2025-07-04 17:26:16.108513 | 2025-07-04 17:26:16.108702 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-07-04 17:26:26.672557 | orchestrator | ok 2025-07-04 17:26:26.681053 | 2025-07-04 17:26:26.681166 | TASK [Wait a little longer for the manager so that everything is ready] 2025-07-04 17:27:26.727696 | orchestrator | ok 2025-07-04 17:27:26.737896 | 2025-07-04 17:27:26.738025 | TASK [Fetch manager ssh hostkey] 2025-07-04 17:27:28.323088 | orchestrator | Output suppressed because no_log was given 2025-07-04 17:27:28.339997 | 2025-07-04 17:27:28.340227 | TASK [Get ssh keypair from terraform environment] 2025-07-04 17:27:28.880184 | orchestrator | ok: Runtime: 0:00:00.008607 2025-07-04 17:27:28.898068 | 2025-07-04 17:27:28.898241 | TASK [Point out that the following task takes some time and does not give any output] 2025-07-04 17:27:28.944798 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 
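The "Wait up to 300 seconds for port 22" task above polls until the SSH banner contains "OpenSSH". The job uses Ansible's `wait_for` module for this; as a hedged sketch, a rough plain-bash equivalent (assuming `nc` and GNU `timeout` are available) could look like:

```shell
# Hypothetical bash equivalent of the wait_for task above; the real job
# uses Ansible's wait_for module, not this loop.
wait_for_ssh() {
    local host=$1 timeout=${2:-300} waited=0
    while [ "$waited" -lt "$timeout" ]; do
        # OpenSSH servers send their version banner right after connect.
        if echo '' | timeout 5 nc "$host" 22 2>/dev/null | grep -q OpenSSH; then
            return 0
        fi
        sleep 5
        waited=$((waited + 5))
    done
    return 1
}
```

With a zero timeout the function returns non-zero immediately, mirroring the task's failure mode when the manager never comes up.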
2025-07-04 17:27:28.954001 | 2025-07-04 17:27:28.954126 | TASK [Run manager part 0] 2025-07-04 17:27:31.014991 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-07-04 17:27:31.105484 | orchestrator | 2025-07-04 17:27:31.105603 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2025-07-04 17:27:31.105632 | orchestrator | 2025-07-04 17:27:31.105676 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2025-07-04 17:27:32.978173 | orchestrator | ok: [testbed-manager] 2025-07-04 17:27:32.978255 | orchestrator | 2025-07-04 17:27:32.978301 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-07-04 17:27:32.978323 | orchestrator | 2025-07-04 17:27:32.978342 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-07-04 17:27:35.138998 | orchestrator | ok: [testbed-manager] 2025-07-04 17:27:35.139045 | orchestrator | 2025-07-04 17:27:35.139052 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-07-04 17:27:35.846615 | orchestrator | ok: [testbed-manager] 2025-07-04 17:27:35.846691 | orchestrator | 2025-07-04 17:27:35.846707 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-07-04 17:27:35.921469 | orchestrator | skipping: [testbed-manager] 2025-07-04 17:27:35.921523 | orchestrator | 2025-07-04 17:27:35.921534 | orchestrator | TASK [Update package cache] **************************************************** 2025-07-04 17:27:35.956808 | orchestrator | skipping: [testbed-manager] 2025-07-04 17:27:35.956861 | orchestrator | 2025-07-04 17:27:35.956869 | orchestrator | TASK [Install required packages] *********************************************** 2025-07-04 17:27:35.987545 | orchestrator | skipping: [testbed-manager] 2025-07-04 17:27:35.987593 | 
orchestrator | 2025-07-04 17:27:35.987599 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-07-04 17:27:36.013054 | orchestrator | skipping: [testbed-manager] 2025-07-04 17:27:36.013130 | orchestrator | 2025-07-04 17:27:36.013146 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-07-04 17:27:36.047083 | orchestrator | skipping: [testbed-manager] 2025-07-04 17:27:36.047132 | orchestrator | 2025-07-04 17:27:36.047140 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ****************************** 2025-07-04 17:27:36.093244 | orchestrator | skipping: [testbed-manager] 2025-07-04 17:27:36.093292 | orchestrator | 2025-07-04 17:27:36.093300 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2025-07-04 17:27:36.128686 | orchestrator | skipping: [testbed-manager] 2025-07-04 17:27:36.128760 | orchestrator | 2025-07-04 17:27:36.128774 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-07-04 17:27:36.944097 | orchestrator | changed: [testbed-manager] 2025-07-04 17:27:36.944137 | orchestrator | 2025-07-04 17:27:36.944143 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2025-07-04 17:31:05.434706 | orchestrator | changed: [testbed-manager] 2025-07-04 17:31:05.434776 | orchestrator | 2025-07-04 17:31:05.434793 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-07-04 17:32:34.489889 | orchestrator | changed: [testbed-manager] 2025-07-04 17:32:34.489985 | orchestrator | 2025-07-04 17:32:34.490003 | orchestrator | TASK [Install required packages] *********************************************** 2025-07-04 17:32:55.689162 | orchestrator | changed: [testbed-manager] 2025-07-04 17:32:55.689216 | orchestrator | 2025-07-04 17:32:55.689227 | orchestrator | TASK [Remove 
some python packages] ********************************************* 2025-07-04 17:33:04.885853 | orchestrator | changed: [testbed-manager] 2025-07-04 17:33:04.885905 | orchestrator | 2025-07-04 17:33:04.885913 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-07-04 17:33:04.928872 | orchestrator | ok: [testbed-manager] 2025-07-04 17:33:04.928955 | orchestrator | 2025-07-04 17:33:04.928972 | orchestrator | TASK [Get current user] ******************************************************** 2025-07-04 17:33:05.762963 | orchestrator | ok: [testbed-manager] 2025-07-04 17:33:05.763216 | orchestrator | 2025-07-04 17:33:05.763240 | orchestrator | TASK [Create venv directory] *************************************************** 2025-07-04 17:33:06.574284 | orchestrator | changed: [testbed-manager] 2025-07-04 17:33:06.574370 | orchestrator | 2025-07-04 17:33:06.574387 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-07-04 17:33:13.315013 | orchestrator | changed: [testbed-manager] 2025-07-04 17:33:13.315115 | orchestrator | 2025-07-04 17:33:13.315191 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2025-07-04 17:33:19.732327 | orchestrator | changed: [testbed-manager] 2025-07-04 17:33:19.732406 | orchestrator | 2025-07-04 17:33:19.732420 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-07-04 17:33:22.565540 | orchestrator | changed: [testbed-manager] 2025-07-04 17:33:22.565625 | orchestrator | 2025-07-04 17:33:22.565639 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-07-04 17:33:24.354608 | orchestrator | changed: [testbed-manager] 2025-07-04 17:33:24.354686 | orchestrator | 2025-07-04 17:33:24.354697 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-07-04 
17:33:25.577152 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-07-04 17:33:25.577236 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-07-04 17:33:25.577250 | orchestrator | 2025-07-04 17:33:25.577263 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-07-04 17:33:25.621031 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-07-04 17:33:25.621111 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-07-04 17:33:25.621126 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-07-04 17:33:25.621140 | orchestrator | deprecation_warnings=False in ansible.cfg. 2025-07-04 17:33:37.716939 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-07-04 17:33:37.717030 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-07-04 17:33:37.717043 | orchestrator | 2025-07-04 17:33:37.717054 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-07-04 17:33:38.317195 | orchestrator | changed: [testbed-manager] 2025-07-04 17:33:38.317286 | orchestrator | 2025-07-04 17:33:38.317303 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-07-04 17:33:59.892966 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-07-04 17:33:59.893008 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-07-04 17:33:59.893015 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-07-04 17:33:59.893020 | orchestrator | 2025-07-04 17:33:59.893025 | orchestrator | TASK [Install local collections] *********************************************** 2025-07-04 17:34:02.283235 | orchestrator | changed: [testbed-manager] => 
(item=ansible-collection-commons) 2025-07-04 17:34:02.283359 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-07-04 17:34:02.283365 | orchestrator | 2025-07-04 17:34:02.283370 | orchestrator | PLAY [Create operator user] **************************************************** 2025-07-04 17:34:02.283375 | orchestrator | 2025-07-04 17:34:02.283379 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-07-04 17:34:03.756128 | orchestrator | ok: [testbed-manager] 2025-07-04 17:34:03.756229 | orchestrator | 2025-07-04 17:34:03.756248 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-07-04 17:34:03.806754 | orchestrator | ok: [testbed-manager] 2025-07-04 17:34:03.806842 | orchestrator | 2025-07-04 17:34:03.806853 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-07-04 17:34:03.879236 | orchestrator | ok: [testbed-manager] 2025-07-04 17:34:03.879296 | orchestrator | 2025-07-04 17:34:03.879307 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-07-04 17:34:04.740560 | orchestrator | changed: [testbed-manager] 2025-07-04 17:34:04.740650 | orchestrator | 2025-07-04 17:34:04.740666 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-07-04 17:34:05.498183 | orchestrator | changed: [testbed-manager] 2025-07-04 17:34:05.498279 | orchestrator | 2025-07-04 17:34:05.498297 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-07-04 17:34:06.948460 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-07-04 17:34:06.948531 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-07-04 17:34:06.948546 | orchestrator | 2025-07-04 17:34:06.948579 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] 
************************* 2025-07-04 17:34:08.415193 | orchestrator | changed: [testbed-manager] 2025-07-04 17:34:08.415310 | orchestrator | 2025-07-04 17:34:08.415329 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-07-04 17:34:10.284017 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-07-04 17:34:10.284107 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-07-04 17:34:10.284121 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-07-04 17:34:10.284134 | orchestrator | 2025-07-04 17:34:10.284147 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2025-07-04 17:34:10.340660 | orchestrator | skipping: [testbed-manager] 2025-07-04 17:34:10.340851 | orchestrator | 2025-07-04 17:34:10.340872 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-07-04 17:34:10.925288 | orchestrator | changed: [testbed-manager] 2025-07-04 17:34:10.925402 | orchestrator | 2025-07-04 17:34:10.925432 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-07-04 17:34:10.996944 | orchestrator | skipping: [testbed-manager] 2025-07-04 17:34:10.997159 | orchestrator | 2025-07-04 17:34:10.997179 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-07-04 17:34:11.924745 | orchestrator | changed: [testbed-manager] => (item=None) 2025-07-04 17:34:11.924859 | orchestrator | changed: [testbed-manager] 2025-07-04 17:34:11.924878 | orchestrator | 2025-07-04 17:34:11.924892 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-07-04 17:34:11.961906 | orchestrator | skipping: [testbed-manager] 2025-07-04 17:34:11.961993 | orchestrator | 2025-07-04 17:34:11.962010 | orchestrator | TASK 
[osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-07-04 17:34:11.999322 | orchestrator | skipping: [testbed-manager] 2025-07-04 17:34:11.999397 | orchestrator | 2025-07-04 17:34:11.999413 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-07-04 17:34:12.035054 | orchestrator | skipping: [testbed-manager] 2025-07-04 17:34:12.035139 | orchestrator | 2025-07-04 17:34:12.035156 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-07-04 17:34:12.088041 | orchestrator | skipping: [testbed-manager] 2025-07-04 17:34:12.088125 | orchestrator | 2025-07-04 17:34:12.088144 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-07-04 17:34:12.840986 | orchestrator | ok: [testbed-manager] 2025-07-04 17:34:12.841075 | orchestrator | 2025-07-04 17:34:12.841095 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-07-04 17:34:12.841110 | orchestrator | 2025-07-04 17:34:12.841235 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-07-04 17:34:14.282208 | orchestrator | ok: [testbed-manager] 2025-07-04 17:34:14.282304 | orchestrator | 2025-07-04 17:34:14.282319 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-07-04 17:34:15.290311 | orchestrator | changed: [testbed-manager] 2025-07-04 17:34:15.290432 | orchestrator | 2025-07-04 17:34:15.290449 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-04 17:34:15.290464 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0 2025-07-04 17:34:15.290477 | orchestrator | 2025-07-04 17:34:15.750411 | orchestrator | ok: Runtime: 0:06:46.135427 2025-07-04 17:34:15.771326 | 2025-07-04 17:34:15.771512 | TASK [Point 
out that the log in on the manager is now possible] 2025-07-04 17:34:15.805314 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2025-07-04 17:34:15.813085 | 2025-07-04 17:34:15.813226 | TASK [Point out that the following task takes some time and does not give any output] 2025-07-04 17:34:15.845681 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-07-04 17:34:15.853050 | 2025-07-04 17:34:15.853182 | TASK [Run manager part 1 + 2] 2025-07-04 17:34:16.712521 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-07-04 17:34:16.774094 | orchestrator | 2025-07-04 17:34:16.774146 | orchestrator | PLAY [Run manager part 1] ***************************************************** 2025-07-04 17:34:16.774153 | orchestrator | 2025-07-04 17:34:16.774166 | orchestrator | TASK [Gathering Facts] ******************************************************** 2025-07-04 17:34:19.468964 | orchestrator | ok: [testbed-manager] 2025-07-04 17:34:19.469019 | orchestrator | 2025-07-04 17:34:19.469041 | orchestrator | TASK [Set venv_command fact (RedHat)] ***************************************** 2025-07-04 17:34:19.510547 | orchestrator | skipping: [testbed-manager] 2025-07-04 17:34:19.510604 | orchestrator | 2025-07-04 17:34:19.510616 | orchestrator | TASK [Set venv_command fact (Debian)] ***************************************** 2025-07-04 17:34:19.546729 | orchestrator | ok: [testbed-manager] 2025-07-04 17:34:19.546775 | orchestrator | 2025-07-04 17:34:19.546782 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-07-04 17:34:19.592737 | orchestrator | ok: [testbed-manager] 2025-07-04 17:34:19.592794 | orchestrator | 2025-07-04 17:34:19.592804 | orchestrator | TASK [osism.commons.repository : Set repository_default fact
to default value] *** 2025-07-04 17:34:19.687665 | orchestrator | ok: [testbed-manager] 2025-07-04 17:34:19.687728 | orchestrator | 2025-07-04 17:34:19.687740 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-07-04 17:34:19.755931 | orchestrator | ok: [testbed-manager] 2025-07-04 17:34:19.755994 | orchestrator | 2025-07-04 17:34:19.756006 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-07-04 17:34:19.813709 | orchestrator | included: /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-07-04 17:34:19.813759 | orchestrator | 2025-07-04 17:34:19.813765 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-07-04 17:34:20.553709 | orchestrator | ok: [testbed-manager] 2025-07-04 17:34:20.553798 | orchestrator | 2025-07-04 17:34:20.553850 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-07-04 17:34:20.601381 | orchestrator | skipping: [testbed-manager] 2025-07-04 17:34:20.601463 | orchestrator | 2025-07-04 17:34:20.601477 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-07-04 17:34:22.041938 | orchestrator | changed: [testbed-manager] 2025-07-04 17:34:22.042009 | orchestrator | 2025-07-04 17:34:22.042068 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-07-04 17:34:22.627764 | orchestrator | ok: [testbed-manager] 2025-07-04 17:34:22.627842 | orchestrator | 2025-07-04 17:34:22.627852 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-07-04 17:34:23.834801 | orchestrator | changed: [testbed-manager] 2025-07-04 17:34:23.834919 | orchestrator | 2025-07-04 17:34:23.834930 | orchestrator | TASK [osism.commons.repository : Update 
package cache] ************************* 2025-07-04 17:34:37.449279 | orchestrator | changed: [testbed-manager] 2025-07-04 17:34:37.449375 | orchestrator | 2025-07-04 17:34:37.449391 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-07-04 17:34:38.219679 | orchestrator | ok: [testbed-manager] 2025-07-04 17:34:38.219729 | orchestrator | 2025-07-04 17:34:38.219736 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-07-04 17:34:38.271255 | orchestrator | skipping: [testbed-manager] 2025-07-04 17:34:38.271319 | orchestrator | 2025-07-04 17:34:38.271518 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-07-04 17:34:39.266609 | orchestrator | changed: [testbed-manager] 2025-07-04 17:34:39.266654 | orchestrator | 2025-07-04 17:34:39.266664 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-07-04 17:34:40.245687 | orchestrator | changed: [testbed-manager] 2025-07-04 17:34:40.245779 | orchestrator | 2025-07-04 17:34:40.245797 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-07-04 17:34:40.854075 | orchestrator | changed: [testbed-manager] 2025-07-04 17:34:40.854165 | orchestrator | 2025-07-04 17:34:40.854181 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-07-04 17:34:40.897257 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-07-04 17:34:40.897363 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-07-04 17:34:40.897379 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-07-04 17:34:40.897394 | orchestrator | deprecation_warnings=False in ansible.cfg. 
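The DEPRECATION WARNING above already states the remedy: setting `deprecation_warnings=False` in `ansible.cfg`. A minimal sketch of writing such a config (the temp-file path is illustrative; the job would use its real `ansible.cfg`):

```shell
# Sketch: silence the deprecation warnings seen above via ansible.cfg.
# The file written here is a throwaway temp file, not the job's config.
cfg=$(mktemp)
printf '[defaults]\ndeprecation_warnings = False\n' > "$cfg"
grep -q 'deprecation_warnings = False' "$cfg" && echo "ansible.cfg updated"
```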
2025-07-04 17:34:47.410848 | orchestrator | changed: [testbed-manager] 2025-07-04 17:34:47.411118 | orchestrator | 2025-07-04 17:34:47.411135 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-07-04 17:34:56.870075 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-07-04 17:34:56.870173 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-07-04 17:34:56.870185 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-07-04 17:34:56.870193 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-07-04 17:34:56.870207 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-07-04 17:34:56.870214 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-07-04 17:34:56.870222 | orchestrator | 2025-07-04 17:34:56.870230 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-07-04 17:34:57.950238 | orchestrator | changed: [testbed-manager] 2025-07-04 17:34:57.950277 | orchestrator | 2025-07-04 17:34:57.950284 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-07-04 17:34:57.995146 | orchestrator | skipping: [testbed-manager] 2025-07-04 17:34:57.995188 | orchestrator | 2025-07-04 17:34:57.995195 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-07-04 17:35:01.374798 | orchestrator | changed: [testbed-manager] 2025-07-04 17:35:01.374901 | orchestrator | 2025-07-04 17:35:01.374909 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-07-04 17:35:01.419883 | orchestrator | skipping: [testbed-manager] 2025-07-04 17:35:01.419923 | orchestrator | 2025-07-04 17:35:01.419931 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-07-04 17:36:49.093268 | orchestrator | changed: [testbed-manager] 2025-07-04 
17:36:49.093393 | orchestrator |
2025-07-04 17:36:49.093414 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-07-04 17:36:50.267950 | orchestrator | ok: [testbed-manager]
2025-07-04 17:36:50.268044 | orchestrator |
2025-07-04 17:36:50.268062 | orchestrator | PLAY RECAP *********************************************************************
2025-07-04 17:36:50.268076 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0
2025-07-04 17:36:50.268088 | orchestrator |
2025-07-04 17:36:50.483769 | orchestrator | ok: Runtime: 0:02:34.183969
2025-07-04 17:36:50.503478 |
2025-07-04 17:36:50.503709 | TASK [Reboot manager]
2025-07-04 17:36:52.046356 | orchestrator | ok: Runtime: 0:00:01.023219
2025-07-04 17:36:52.062778 |
2025-07-04 17:36:52.063005 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2025-07-04 17:37:06.614764 | orchestrator | ok
2025-07-04 17:37:06.624178 |
2025-07-04 17:37:06.624303 | TASK [Wait a little longer for the manager so that everything is ready]
2025-07-04 17:38:06.661085 | orchestrator | ok
2025-07-04 17:38:06.669557 |
2025-07-04 17:38:06.669713 | TASK [Deploy manager + bootstrap nodes]
2025-07-04 17:38:09.418012 | orchestrator |
2025-07-04 17:38:09.418154 | orchestrator | # DEPLOY MANAGER
2025-07-04 17:38:09.418163 | orchestrator |
2025-07-04 17:38:09.418169 | orchestrator | + set -e
2025-07-04 17:38:09.418174 | orchestrator | + echo
2025-07-04 17:38:09.418180 | orchestrator | + echo '# DEPLOY MANAGER'
2025-07-04 17:38:09.418186 | orchestrator | + echo
2025-07-04 17:38:09.418207 | orchestrator | + cat /opt/manager-vars.sh
2025-07-04 17:38:09.421685 | orchestrator | export NUMBER_OF_NODES=6
2025-07-04 17:38:09.421711 | orchestrator |
2025-07-04 17:38:09.421717 | orchestrator | export CEPH_VERSION=reef
2025-07-04 17:38:09.421724 | orchestrator | export CONFIGURATION_VERSION=main
2025-07-04 17:38:09.421729 | orchestrator | export MANAGER_VERSION=9.1.0
2025-07-04 17:38:09.421740 | orchestrator | export OPENSTACK_VERSION=2024.2
2025-07-04 17:38:09.421745 | orchestrator |
2025-07-04 17:38:09.421753 | orchestrator | export ARA=false
2025-07-04 17:38:09.421757 | orchestrator | export DEPLOY_MODE=manager
2025-07-04 17:38:09.421765 | orchestrator | export TEMPEST=false
2025-07-04 17:38:09.421769 | orchestrator | export IS_ZUUL=true
2025-07-04 17:38:09.421774 | orchestrator |
2025-07-04 17:38:09.421781 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.186
2025-07-04 17:38:09.421786 | orchestrator | export EXTERNAL_API=false
2025-07-04 17:38:09.421791 | orchestrator |
2025-07-04 17:38:09.421795 | orchestrator | export IMAGE_USER=ubuntu
2025-07-04 17:38:09.421802 | orchestrator | export IMAGE_NODE_USER=ubuntu
2025-07-04 17:38:09.421806 | orchestrator |
2025-07-04 17:38:09.421810 | orchestrator | export CEPH_STACK=ceph-ansible
2025-07-04 17:38:09.421986 | orchestrator |
2025-07-04 17:38:09.421998 | orchestrator | + echo
2025-07-04 17:38:09.422005 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-07-04 17:38:09.422999 | orchestrator | ++ export INTERACTIVE=false
2025-07-04 17:38:09.423019 | orchestrator | ++ INTERACTIVE=false
2025-07-04 17:38:09.423024 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-07-04 17:38:09.423030 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-07-04 17:38:09.423305 | orchestrator | + source /opt/manager-vars.sh
2025-07-04 17:38:09.423316 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-07-04 17:38:09.423321 | orchestrator | ++ NUMBER_OF_NODES=6
2025-07-04 17:38:09.423326 | orchestrator | ++ export CEPH_VERSION=reef
2025-07-04 17:38:09.423331 | orchestrator | ++ CEPH_VERSION=reef
2025-07-04 17:38:09.423338 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-07-04 17:38:09.423345 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-07-04 17:38:09.423352 | orchestrator | ++ export MANAGER_VERSION=9.1.0
2025-07-04 17:38:09.423359 | orchestrator | ++ MANAGER_VERSION=9.1.0
2025-07-04 17:38:09.423366 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-07-04 17:38:09.423381 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-07-04 17:38:09.423389 | orchestrator | ++ export ARA=false
2025-07-04 17:38:09.423396 | orchestrator | ++ ARA=false
2025-07-04 17:38:09.423403 | orchestrator | ++ export DEPLOY_MODE=manager
2025-07-04 17:38:09.423410 | orchestrator | ++ DEPLOY_MODE=manager
2025-07-04 17:38:09.423418 | orchestrator | ++ export TEMPEST=false
2025-07-04 17:38:09.423425 | orchestrator | ++ TEMPEST=false
2025-07-04 17:38:09.423433 | orchestrator | ++ export IS_ZUUL=true
2025-07-04 17:38:09.423440 | orchestrator | ++ IS_ZUUL=true
2025-07-04 17:38:09.423448 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.186
2025-07-04 17:38:09.423457 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.186
2025-07-04 17:38:09.423473 | orchestrator | ++ export EXTERNAL_API=false
2025-07-04 17:38:09.423481 | orchestrator | ++ EXTERNAL_API=false
2025-07-04 17:38:09.423489 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-07-04 17:38:09.423496 | orchestrator | ++ IMAGE_USER=ubuntu
2025-07-04 17:38:09.423504 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-07-04 17:38:09.423511 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-07-04 17:38:09.423519 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-07-04 17:38:09.423526 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-07-04 17:38:09.423531 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver
2025-07-04 17:38:09.484226 | orchestrator | + docker version
2025-07-04 17:38:09.775186 | orchestrator | Client: Docker Engine - Community
2025-07-04 17:38:09.775270 | orchestrator | Version: 27.5.1
2025-07-04 17:38:09.775284 | orchestrator | API version: 1.47
2025-07-04 17:38:09.775292 | orchestrator | Go version: go1.22.11
2025-07-04 17:38:09.775301 | orchestrator | Git commit: 9f9e405
2025-07-04 17:38:09.775309 | orchestrator | Built: Wed Jan 22 13:41:48 2025
2025-07-04 17:38:09.775318 | orchestrator | OS/Arch: linux/amd64
2025-07-04 17:38:09.775327 | orchestrator | Context: default
2025-07-04 17:38:09.775335 | orchestrator |
2025-07-04 17:38:09.775343 | orchestrator | Server: Docker Engine - Community
2025-07-04 17:38:09.775365 | orchestrator | Engine:
2025-07-04 17:38:09.775373 | orchestrator | Version: 27.5.1
2025-07-04 17:38:09.775381 | orchestrator | API version: 1.47 (minimum version 1.24)
2025-07-04 17:38:09.775416 | orchestrator | Go version: go1.22.11
2025-07-04 17:38:09.775424 | orchestrator | Git commit: 4c9b3b0
2025-07-04 17:38:09.775432 | orchestrator | Built: Wed Jan 22 13:41:48 2025
2025-07-04 17:38:09.775440 | orchestrator | OS/Arch: linux/amd64
2025-07-04 17:38:09.775448 | orchestrator | Experimental: false
2025-07-04 17:38:09.775456 | orchestrator | containerd:
2025-07-04 17:38:09.775464 | orchestrator | Version: 1.7.27
2025-07-04 17:38:09.775476 | orchestrator | GitCommit: 05044ec0a9a75232cad458027ca83437aae3f4da
2025-07-04 17:38:09.775489 | orchestrator | runc:
2025-07-04 17:38:09.775502 | orchestrator | Version: 1.2.5
2025-07-04 17:38:09.775516 | orchestrator | GitCommit: v1.2.5-0-g59923ef
2025-07-04 17:38:09.775529 | orchestrator | docker-init:
2025-07-04 17:38:09.775541 | orchestrator | Version: 0.19.0
2025-07-04 17:38:09.775554 | orchestrator | GitCommit: de40ad0
2025-07-04 17:38:09.779284 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh
2025-07-04 17:38:09.792985 | orchestrator | + set -e
2025-07-04 17:38:09.793088 | orchestrator | + source /opt/manager-vars.sh
2025-07-04 17:38:09.793111 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-07-04 17:38:09.793130 | orchestrator | ++ NUMBER_OF_NODES=6
2025-07-04 17:38:09.793148 | orchestrator | ++ export CEPH_VERSION=reef
2025-07-04 17:38:09.793166 | orchestrator | ++ CEPH_VERSION=reef
2025-07-04 17:38:09.793185 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-07-04 17:38:09.793205 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-07-04 17:38:09.793249 | orchestrator | ++ export MANAGER_VERSION=9.1.0
2025-07-04 17:38:09.793269 | orchestrator | ++ MANAGER_VERSION=9.1.0
2025-07-04 17:38:09.793287 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-07-04 17:38:09.793307 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-07-04 17:38:09.793325 | orchestrator | ++ export ARA=false
2025-07-04 17:38:09.793344 | orchestrator | ++ ARA=false
2025-07-04 17:38:09.793355 | orchestrator | ++ export DEPLOY_MODE=manager
2025-07-04 17:38:09.793367 | orchestrator | ++ DEPLOY_MODE=manager
2025-07-04 17:38:09.793378 | orchestrator | ++ export TEMPEST=false
2025-07-04 17:38:09.793389 | orchestrator | ++ TEMPEST=false
2025-07-04 17:38:09.793399 | orchestrator | ++ export IS_ZUUL=true
2025-07-04 17:38:09.793410 | orchestrator | ++ IS_ZUUL=true
2025-07-04 17:38:09.793421 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.186
2025-07-04 17:38:09.793433 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.186
2025-07-04 17:38:09.793444 | orchestrator | ++ export EXTERNAL_API=false
2025-07-04 17:38:09.793455 | orchestrator | ++ EXTERNAL_API=false
2025-07-04 17:38:09.793476 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-07-04 17:38:09.793488 | orchestrator | ++ IMAGE_USER=ubuntu
2025-07-04 17:38:09.793499 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-07-04 17:38:09.793510 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-07-04 17:38:09.793521 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-07-04 17:38:09.793532 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-07-04 17:38:09.793543 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-07-04 17:38:09.793554 | orchestrator | ++ export INTERACTIVE=false
2025-07-04 17:38:09.793564 | orchestrator | ++ INTERACTIVE=false
2025-07-04 17:38:09.793575 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-07-04 17:38:09.793590 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-07-04 17:38:09.793601 | orchestrator | + [[ 9.1.0 != \l\a\t\e\s\t ]]
2025-07-04 17:38:09.793613 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 9.1.0
2025-07-04 17:38:09.801822 | orchestrator | + set -e
2025-07-04 17:38:09.801924 | orchestrator | + VERSION=9.1.0
2025-07-04 17:38:09.801941 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 9.1.0/g' /opt/configuration/environments/manager/configuration.yml
2025-07-04 17:38:09.810270 | orchestrator | + [[ 9.1.0 != \l\a\t\e\s\t ]]
2025-07-04 17:38:09.810313 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml
2025-07-04 17:38:09.814395 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml
2025-07-04 17:38:09.819190 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh
2025-07-04 17:38:09.827524 | orchestrator | + set -e
2025-07-04 17:38:09.827669 | orchestrator | /opt/configuration ~
2025-07-04 17:38:09.827701 | orchestrator | + pushd /opt/configuration
2025-07-04 17:38:09.827722 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-07-04 17:38:09.830655 | orchestrator | + source /opt/venv/bin/activate
2025-07-04 17:38:09.833009 | orchestrator | ++ deactivate nondestructive
2025-07-04 17:38:09.833045 | orchestrator | ++ '[' -n '' ']'
2025-07-04 17:38:09.833061 | orchestrator | ++ '[' -n '' ']'
2025-07-04 17:38:09.833099 | orchestrator | ++ hash -r
2025-07-04 17:38:09.833110 | orchestrator | ++ '[' -n '' ']'
2025-07-04 17:38:09.833121 | orchestrator | ++ unset VIRTUAL_ENV
2025-07-04 17:38:09.833132 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2025-07-04 17:38:09.833144 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2025-07-04 17:38:09.833155 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2025-07-04 17:38:09.833166 | orchestrator | ++ '[' linux-gnu = msys ']'
2025-07-04 17:38:09.833177 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2025-07-04 17:38:09.833188 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2025-07-04 17:38:09.833200 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-07-04 17:38:09.833211 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-07-04 17:38:09.833223 | orchestrator | ++ export PATH
2025-07-04 17:38:09.833234 | orchestrator | ++ '[' -n '' ']'
2025-07-04 17:38:09.833245 | orchestrator | ++ '[' -z '' ']'
2025-07-04 17:38:09.833256 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2025-07-04 17:38:09.833267 | orchestrator | ++ PS1='(venv) '
2025-07-04 17:38:09.833278 | orchestrator | ++ export PS1
2025-07-04 17:38:09.833289 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2025-07-04 17:38:09.833300 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2025-07-04 17:38:09.833311 | orchestrator | ++ hash -r
2025-07-04 17:38:09.833322 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging
2025-07-04 17:38:11.116870 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3)
2025-07-04 17:38:11.117645 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.4)
2025-07-04 17:38:11.119116 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6)
2025-07-04 17:38:11.120498 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.2)
2025-07-04 17:38:11.121926 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (25.0)
2025-07-04 17:38:11.134587 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.2.1)
2025-07-04 17:38:11.136157 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6)
2025-07-04 17:38:11.137073 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.19)
2025-07-04 17:38:11.138400 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2)
2025-07-04 17:38:11.171349 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.2)
2025-07-04 17:38:11.172900 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.10)
2025-07-04 17:38:11.174605 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.5.0)
2025-07-04 17:38:11.175783 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2025.6.15)
2025-07-04 17:38:11.180129 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.2)
2025-07-04 17:38:11.392268 | orchestrator | ++ which gilt
2025-07-04 17:38:11.456272 | orchestrator | + GILT=/opt/venv/bin/gilt
2025-07-04 17:38:11.456343 | orchestrator | + /opt/venv/bin/gilt overlay
2025-07-04 17:38:11.633474 | orchestrator | osism.cfg-generics:
2025-07-04 17:38:11.812086 | orchestrator | - copied (v0.20250530.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/
2025-07-04 17:38:11.812190 | orchestrator | - copied (v0.20250530.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/
2025-07-04 17:38:11.812704 | orchestrator | - copied (v0.20250530.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/
2025-07-04 17:38:11.812890 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/
2025-07-04 17:38:12.411989 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/
2025-07-04 17:38:12.422895 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/
2025-07-04 17:38:12.738288 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/
2025-07-04 17:38:12.787511 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-07-04 17:38:12.787602 | orchestrator | + deactivate
2025-07-04 17:38:12.787617 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2025-07-04 17:38:12.787631 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-07-04 17:38:12.787642 | orchestrator | + export PATH
2025-07-04 17:38:12.787654 | orchestrator | + unset _OLD_VIRTUAL_PATH
2025-07-04 17:38:12.787666 | orchestrator | + '[' -n '' ']'
2025-07-04 17:38:12.787680 | orchestrator | + hash -r
2025-07-04 17:38:12.787691 | orchestrator | + '[' -n '' ']'
2025-07-04 17:38:12.787702 | orchestrator | + unset VIRTUAL_ENV
2025-07-04 17:38:12.787713 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2025-07-04 17:38:12.787724 | orchestrator | ~
2025-07-04 17:38:12.787736 | orchestrator | + '[' '!' '' = nondestructive ']'
2025-07-04 17:38:12.787748 | orchestrator | + unset -f deactivate
2025-07-04 17:38:12.787759 | orchestrator | + popd
2025-07-04 17:38:12.790427 | orchestrator | + [[ 9.1.0 == \l\a\t\e\s\t ]]
2025-07-04 17:38:12.790490 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]]
2025-07-04 17:38:12.790819 | orchestrator | ++ semver 9.1.0 7.0.0
2025-07-04 17:38:12.857674 | orchestrator | + [[ 1 -ge 0 ]]
2025-07-04 17:38:12.857776 | orchestrator | + echo 'enable_osism_kubernetes: true'
2025-07-04 17:38:12.857794 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh
2025-07-04 17:38:12.957214 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-07-04 17:38:12.957309 | orchestrator | + source /opt/venv/bin/activate
2025-07-04 17:38:12.957323 | orchestrator | ++ deactivate nondestructive
2025-07-04 17:38:12.957336 | orchestrator | ++ '[' -n '' ']'
2025-07-04 17:38:12.957347 | orchestrator | ++ '[' -n '' ']'
2025-07-04 17:38:12.957358 | orchestrator | ++ hash -r
2025-07-04 17:38:12.957370 | orchestrator | ++ '[' -n '' ']'
2025-07-04 17:38:12.957382 | orchestrator | ++ unset VIRTUAL_ENV
2025-07-04 17:38:12.957393 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2025-07-04 17:38:12.957404 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2025-07-04 17:38:12.957416 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2025-07-04 17:38:12.957427 | orchestrator | ++ '[' linux-gnu = msys ']'
2025-07-04 17:38:12.957438 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2025-07-04 17:38:12.957450 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2025-07-04 17:38:12.957461 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-07-04 17:38:12.957473 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-07-04 17:38:12.957511 | orchestrator | ++ export PATH
2025-07-04 17:38:12.957524 | orchestrator | ++ '[' -n '' ']'
2025-07-04 17:38:12.957535 | orchestrator | ++ '[' -z '' ']'
2025-07-04 17:38:12.957546 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2025-07-04 17:38:12.957557 | orchestrator | ++ PS1='(venv) '
2025-07-04 17:38:12.957568 | orchestrator | ++ export PS1
2025-07-04 17:38:12.957579 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2025-07-04 17:38:12.957590 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2025-07-04 17:38:12.957601 | orchestrator | ++ hash -r
2025-07-04 17:38:12.957613 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml
2025-07-04 17:38:14.249231 | orchestrator |
2025-07-04 17:38:14.249337 | orchestrator | PLAY [Copy custom facts] *******************************************************
2025-07-04 17:38:14.249354 | orchestrator |
2025-07-04 17:38:14.249367 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-07-04 17:38:14.850782 | orchestrator | ok: [testbed-manager]
2025-07-04 17:38:14.850927 | orchestrator |
2025-07-04 17:38:14.850946 | orchestrator | TASK [Copy fact files] *********************************************************
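The `set-manager-version.sh` steps traced above (pin `manager_version` with `sed`, then drop the explicit `ceph_version`/`openstack_version` lines for a pinned release) can be sketched as follows. This is a minimal sketch assuming GNU `sed`; `CONFIG` is a temp file standing in for `/opt/configuration/environments/manager/configuration.yml`.

```shell
#!/usr/bin/env bash
set -e
VERSION=9.1.0
CONFIG=$(mktemp)
printf 'manager_version: latest\nceph_version: reef\nopenstack_version: 2024.2\n' > "$CONFIG"

# Pin the manager version in place.
sed -i "s/manager_version: .*/manager_version: ${VERSION}/g" "$CONFIG"

# For a pinned (non-latest) release, drop the explicit ceph/openstack
# versions so they are derived from the manager release instead.
if [[ ${VERSION} != latest ]]; then
  sed -i '/ceph_version:/d' "$CONFIG"
  sed -i '/openstack_version:/d' "$CONFIG"
fi

cat "$CONFIG"
```

After this runs, only the pinned `manager_version: 9.1.0` line remains in the file.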
2025-07-04 17:38:15.913573 | orchestrator | changed: [testbed-manager]
2025-07-04 17:38:15.913678 | orchestrator |
2025-07-04 17:38:15.913696 | orchestrator | PLAY [Before the deployment of the manager] ************************************
2025-07-04 17:38:15.913709 | orchestrator |
2025-07-04 17:38:15.913721 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-07-04 17:38:18.327817 | orchestrator | ok: [testbed-manager]
2025-07-04 17:38:18.327972 | orchestrator |
2025-07-04 17:38:18.328001 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************
2025-07-04 17:38:18.386282 | orchestrator | ok: [testbed-manager]
2025-07-04 17:38:18.386528 | orchestrator |
2025-07-04 17:38:18.386565 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] ****************************
2025-07-04 17:38:18.891239 | orchestrator | changed: [testbed-manager]
2025-07-04 17:38:18.891336 | orchestrator |
2025-07-04 17:38:18.891355 | orchestrator | TASK [Add netbox_enable parameter] *********************************************
2025-07-04 17:38:18.941143 | orchestrator | skipping: [testbed-manager]
2025-07-04 17:38:18.941259 | orchestrator |
2025-07-04 17:38:18.941301 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2025-07-04 17:38:19.292250 | orchestrator | changed: [testbed-manager]
2025-07-04 17:38:19.292335 | orchestrator |
2025-07-04 17:38:19.292348 | orchestrator | TASK [Use insecure glance configuration] ***************************************
2025-07-04 17:38:19.350424 | orchestrator | skipping: [testbed-manager]
2025-07-04 17:38:19.350496 | orchestrator |
2025-07-04 17:38:19.350505 | orchestrator | TASK [Check if /etc/OTC_region exist] ******************************************
2025-07-04 17:38:19.701942 | orchestrator | ok: [testbed-manager]
2025-07-04 17:38:19.702086 | orchestrator |
2025-07-04 17:38:19.702105 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************
2025-07-04 17:38:19.814406 | orchestrator | skipping: [testbed-manager]
2025-07-04 17:38:19.814500 | orchestrator |
2025-07-04 17:38:19.814517 | orchestrator | PLAY [Apply role traefik] ******************************************************
2025-07-04 17:38:19.814529 | orchestrator |
2025-07-04 17:38:19.814541 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-07-04 17:38:21.673522 | orchestrator | ok: [testbed-manager]
2025-07-04 17:38:21.673614 | orchestrator |
2025-07-04 17:38:21.673631 | orchestrator | TASK [Apply traefik role] ******************************************************
2025-07-04 17:38:21.774687 | orchestrator | included: osism.services.traefik for testbed-manager
2025-07-04 17:38:21.774772 | orchestrator |
2025-07-04 17:38:21.774783 | orchestrator | TASK [osism.services.traefik : Include config tasks] ***************************
2025-07-04 17:38:21.840195 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager
2025-07-04 17:38:21.840306 | orchestrator |
2025-07-04 17:38:21.840323 | orchestrator | TASK [osism.services.traefik : Create required directories] ********************
2025-07-04 17:38:23.006441 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik)
2025-07-04 17:38:23.006532 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates)
2025-07-04 17:38:23.006548 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration)
2025-07-04 17:38:23.006559 | orchestrator |
2025-07-04 17:38:23.006571 | orchestrator | TASK [osism.services.traefik : Copy configuration files] ***********************
2025-07-04 17:38:24.872280 | orchestrator | changed: [testbed-manager] => (item=traefik.yml)
2025-07-04 17:38:24.872360 | orchestrator | changed: [testbed-manager] => (item=traefik.env)
2025-07-04 17:38:24.872372 | orchestrator | changed: [testbed-manager] => (item=certificates.yml)
2025-07-04 17:38:24.872384 | orchestrator |
2025-07-04 17:38:24.872396 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ********************
2025-07-04 17:38:25.527697 | orchestrator | changed: [testbed-manager] => (item=None)
2025-07-04 17:38:25.527800 | orchestrator | changed: [testbed-manager]
2025-07-04 17:38:25.527817 | orchestrator |
2025-07-04 17:38:25.527853 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] *********************
2025-07-04 17:38:26.201751 | orchestrator | changed: [testbed-manager] => (item=None)
2025-07-04 17:38:26.201861 | orchestrator | changed: [testbed-manager]
2025-07-04 17:38:26.201876 | orchestrator |
2025-07-04 17:38:26.201887 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] *********************
2025-07-04 17:38:26.266753 | orchestrator | skipping: [testbed-manager]
2025-07-04 17:38:26.266930 | orchestrator |
2025-07-04 17:38:26.266950 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] *******************
2025-07-04 17:38:26.658755 | orchestrator | ok: [testbed-manager]
2025-07-04 17:38:26.658825 | orchestrator |
2025-07-04 17:38:26.658851 | orchestrator | TASK [osism.services.traefik : Include service tasks] **************************
2025-07-04 17:38:26.737519 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager
2025-07-04 17:38:26.737630 | orchestrator |
2025-07-04 17:38:26.737653 | orchestrator | TASK [osism.services.traefik : Create traefik external network] ****************
2025-07-04 17:38:27.912502 | orchestrator | changed: [testbed-manager]
2025-07-04 17:38:27.912598 | orchestrator |
2025-07-04 17:38:27.912614 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] *******************
2025-07-04 17:38:28.753750 | orchestrator | changed: [testbed-manager]
2025-07-04 17:38:28.753889 | orchestrator |
2025-07-04 17:38:28.753916 | orchestrator | TASK [osism.services.traefik : Manage traefik service] *************************
2025-07-04 17:38:40.394928 | orchestrator | changed: [testbed-manager]
2025-07-04 17:38:40.395034 | orchestrator |
2025-07-04 17:38:40.395068 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] *************
2025-07-04 17:38:40.445100 | orchestrator | skipping: [testbed-manager]
2025-07-04 17:38:40.445205 | orchestrator |
2025-07-04 17:38:40.445218 | orchestrator | PLAY [Deploy manager service] **************************************************
2025-07-04 17:38:40.445227 | orchestrator |
2025-07-04 17:38:40.445234 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-07-04 17:38:42.331000 | orchestrator | ok: [testbed-manager]
2025-07-04 17:38:42.331111 | orchestrator |
2025-07-04 17:38:42.331128 | orchestrator | TASK [Apply manager role] ******************************************************
2025-07-04 17:38:42.449102 | orchestrator | included: osism.services.manager for testbed-manager
2025-07-04 17:38:42.449242 | orchestrator |
2025-07-04 17:38:42.449263 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2025-07-04 17:38:42.516817 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2025-07-04 17:38:42.516957 | orchestrator |
2025-07-04 17:38:42.516973 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2025-07-04 17:38:45.236949 | orchestrator | ok: [testbed-manager]
2025-07-04 17:38:45.237026 | orchestrator |
2025-07-04 17:38:45.237034 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2025-07-04 17:38:45.300115 | orchestrator | ok: [testbed-manager]
2025-07-04 17:38:45.300194 | orchestrator |
2025-07-04 17:38:45.300204 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2025-07-04 17:38:45.436870 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2025-07-04 17:38:45.436949 | orchestrator |
2025-07-04 17:38:45.436957 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2025-07-04 17:38:48.503599 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible)
2025-07-04 17:38:48.503734 | orchestrator | changed: [testbed-manager] => (item=/opt/archive)
2025-07-04 17:38:48.503761 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration)
2025-07-04 17:38:48.503774 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data)
2025-07-04 17:38:48.503785 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2025-07-04 17:38:48.503797 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets)
2025-07-04 17:38:48.503808 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets)
2025-07-04 17:38:48.503819 | orchestrator | changed: [testbed-manager] => (item=/opt/state)
2025-07-04 17:38:48.503830 | orchestrator |
2025-07-04 17:38:48.503898 | orchestrator | TASK [osism.services.manager : Copy all environment file] **********************
2025-07-04 17:38:49.176746 | orchestrator | changed: [testbed-manager]
2025-07-04 17:38:49.177796 | orchestrator |
2025-07-04 17:38:49.177894 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2025-07-04 17:38:49.849618 | orchestrator | changed: [testbed-manager]
2025-07-04 17:38:49.849725 | orchestrator |
2025-07-04 17:38:49.849741 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2025-07-04 17:38:49.929163 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2025-07-04 17:38:49.929260 | orchestrator |
2025-07-04 17:38:49.929276 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2025-07-04 17:38:51.198113 | orchestrator | changed: [testbed-manager] => (item=ara)
2025-07-04 17:38:51.198208 | orchestrator | changed: [testbed-manager] => (item=ara-server)
2025-07-04 17:38:51.198223 | orchestrator |
2025-07-04 17:38:51.198236 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2025-07-04 17:38:51.876073 | orchestrator | changed: [testbed-manager]
2025-07-04 17:38:51.876171 | orchestrator |
2025-07-04 17:38:51.876187 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2025-07-04 17:38:51.943022 | orchestrator | skipping: [testbed-manager]
2025-07-04 17:38:51.943117 | orchestrator |
2025-07-04 17:38:51.943132 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2025-07-04 17:38:52.004767 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2025-07-04 17:38:52.004944 | orchestrator |
2025-07-04 17:38:52.004964 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2025-07-04 17:38:53.455918 | orchestrator | changed: [testbed-manager] => (item=None)
2025-07-04 17:38:53.456030 | orchestrator | changed: [testbed-manager] => (item=None)
2025-07-04 17:38:53.456046 | orchestrator | changed: [testbed-manager]
2025-07-04 17:38:53.456061 | orchestrator |
2025-07-04 17:38:53.456073 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2025-07-04 17:38:54.150600 | orchestrator | changed: [testbed-manager]
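The deploy script traced earlier repeatedly sources `/opt/venv/bin/activate` before venv-only work (`pip3 install`, `gilt overlay`, `ansible-playbook`) and calls `deactivate` afterwards. The PATH bookkeeping those traces show can be sketched as follows; `VIRTUAL_ENV` is taken from the log, and `ORIG_PATH` exists only so the restore can be checked.

```shell
#!/usr/bin/env bash
VIRTUAL_ENV=/opt/venv          # path taken from the log
ORIG_PATH=$PATH                # kept only to verify the restore below

_OLD_VIRTUAL_PATH=$PATH        # activate: remember the caller's PATH ...
PATH="$VIRTUAL_ENV/bin:$PATH"  # ... and prepend the venv bin directory,
                               # so venv binaries shadow system ones

# ... venv-only work would happen here ...

PATH=$_OLD_VIRTUAL_PATH        # deactivate: the caller's PATH comes back
unset _OLD_VIRTUAL_PATH
```

This is why the trace shows `_OLD_VIRTUAL_PATH=...` on activate and `PATH=...` followed by `unset _OLD_VIRTUAL_PATH` on deactivate.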
2025-07-04 17:38:54.150710 | orchestrator |
2025-07-04 17:38:54.150727 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2025-07-04 17:38:54.219436 | orchestrator | skipping: [testbed-manager]
2025-07-04 17:38:54.220336 | orchestrator |
2025-07-04 17:38:54.220365 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2025-07-04 17:38:54.330367 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2025-07-04 17:38:54.330464 | orchestrator |
2025-07-04 17:38:54.330481 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2025-07-04 17:38:54.875023 | orchestrator | changed: [testbed-manager]
2025-07-04 17:38:54.875129 | orchestrator |
2025-07-04 17:38:54.875148 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2025-07-04 17:38:55.287502 | orchestrator | changed: [testbed-manager]
2025-07-04 17:38:55.287610 | orchestrator |
2025-07-04 17:38:55.287628 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2025-07-04 17:38:56.559203 | orchestrator | changed: [testbed-manager] => (item=conductor)
2025-07-04 17:38:56.559338 | orchestrator | changed: [testbed-manager] => (item=openstack)
2025-07-04 17:38:56.559366 | orchestrator |
2025-07-04 17:38:56.559381 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2025-07-04 17:38:57.232202 | orchestrator | changed: [testbed-manager]
2025-07-04 17:38:57.232304 | orchestrator |
2025-07-04 17:38:57.232321 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2025-07-04 17:38:57.642394 | orchestrator | ok: [testbed-manager]
2025-07-04 17:38:57.642507 | orchestrator |
2025-07-04 17:38:57.642524 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2025-07-04 17:38:58.015630 | orchestrator | changed: [testbed-manager]
2025-07-04 17:38:58.015738 | orchestrator |
2025-07-04 17:38:58.015755 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2025-07-04 17:38:58.068421 | orchestrator | skipping: [testbed-manager]
2025-07-04 17:38:58.068524 | orchestrator |
2025-07-04 17:38:58.068540 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2025-07-04 17:38:58.137615 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2025-07-04 17:38:58.137722 | orchestrator |
2025-07-04 17:38:58.137746 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2025-07-04 17:38:58.193437 | orchestrator | ok: [testbed-manager]
2025-07-04 17:38:58.193529 | orchestrator |
2025-07-04 17:38:58.193544 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2025-07-04 17:39:00.304948 | orchestrator | changed: [testbed-manager] => (item=osism)
2025-07-04 17:39:00.305060 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker)
2025-07-04 17:39:00.305071 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager)
2025-07-04 17:39:00.305079 | orchestrator |
2025-07-04 17:39:00.305087 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2025-07-04 17:39:01.048778 | orchestrator | changed: [testbed-manager]
2025-07-04 17:39:01.048949 | orchestrator |
2025-07-04 17:39:01.048969 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] *********************
2025-07-04 17:39:01.793685 | orchestrator | changed: [testbed-manager]
2025-07-04 17:39:01.793811 | orchestrator |
2025-07-04 17:39:01.793870 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] ***********************
2025-07-04 17:39:02.516025 | orchestrator | changed: [testbed-manager]
2025-07-04 17:39:02.516122 | orchestrator |
2025-07-04 17:39:02.516136 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2025-07-04 17:39:02.585568 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2025-07-04 17:39:02.585668 | orchestrator |
2025-07-04 17:39:02.585687 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2025-07-04 17:39:02.632631 | orchestrator | ok: [testbed-manager]
2025-07-04 17:39:02.632737 | orchestrator |
2025-07-04 17:39:02.632755 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2025-07-04 17:39:03.362917 | orchestrator | changed: [testbed-manager] => (item=osism-include)
2025-07-04 17:39:03.363051 | orchestrator |
2025-07-04 17:39:03.363078 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2025-07-04 17:39:03.437728 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2025-07-04 17:39:03.437819 | orchestrator |
2025-07-04 17:39:03.437831 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] *****************
2025-07-04 17:39:04.166281 | orchestrator | changed: [testbed-manager]
2025-07-04 17:39:04.166393 | orchestrator |
2025-07-04 17:39:04.166417 | orchestrator | TASK [osism.services.manager : Create traefik external network] ****************
2025-07-04 17:39:04.814099 | orchestrator | ok: [testbed-manager]
2025-07-04 17:39:04.814228 | orchestrator |
2025-07-04 17:39:04.814255 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] ***
2025-07-04 17:39:04.879757 | orchestrator | skipping: [testbed-manager]
2025-07-04 17:39:04.879896 | orchestrator |
2025-07-04 17:39:04.879914 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] ***
2025-07-04 17:39:04.943280 | orchestrator | ok: [testbed-manager]
2025-07-04 17:39:04.943378 | orchestrator |
2025-07-04 17:39:04.943392 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] *******************
2025-07-04 17:39:05.782269 | orchestrator | changed: [testbed-manager]
2025-07-04 17:39:05.782372 | orchestrator |
2025-07-04 17:39:05.782389 | orchestrator | TASK [osism.services.manager : Pull container images] **************************
2025-07-04 17:40:14.566516 | orchestrator | changed: [testbed-manager]
2025-07-04 17:40:14.566642 | orchestrator |
2025-07-04 17:40:14.566668 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] ***
2025-07-04 17:40:15.557759 | orchestrator | ok: [testbed-manager]
2025-07-04 17:40:15.557930 | orchestrator |
2025-07-04 17:40:15.557959 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] *******
2025-07-04 17:40:15.615461 | orchestrator | skipping: [testbed-manager]
2025-07-04 17:40:15.615592 | orchestrator |
2025-07-04 17:40:15.615619 | orchestrator | TASK [osism.services.manager : Manage manager service] *************************
2025-07-04 17:40:18.115156 | orchestrator | changed: [testbed-manager]
2025-07-04 17:40:18.115261 | orchestrator |
2025-07-04 17:40:18.115278 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ******
2025-07-04 17:40:18.180902 | orchestrator | ok: [testbed-manager]
2025-07-04 17:40:18.181005 | orchestrator |
2025-07-04 17:40:18.181020 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2025-07-04 17:40:18.181033 | orchestrator |
2025-07-04 17:40:18.181045 | orchestrator | RUNNING
HANDLER [osism.services.manager : Restart manager service] ************* 2025-07-04 17:40:18.231685 | orchestrator | skipping: [testbed-manager] 2025-07-04 17:40:18.231785 | orchestrator | 2025-07-04 17:40:18.231831 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2025-07-04 17:41:18.282339 | orchestrator | Pausing for 60 seconds 2025-07-04 17:41:18.282463 | orchestrator | changed: [testbed-manager] 2025-07-04 17:41:18.282480 | orchestrator | 2025-07-04 17:41:18.282493 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2025-07-04 17:41:21.894795 | orchestrator | changed: [testbed-manager] 2025-07-04 17:41:21.894857 | orchestrator | 2025-07-04 17:41:21.894880 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2025-07-04 17:42:03.684016 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2025-07-04 17:42:03.684128 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 
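The `Wait for an healthy manager service` handler above polled with up to 50 retries before succeeding. Its actual implementation is not visible in this log; a shell sketch of the same retry-until-healthy pattern, with a hypothetical probe (the manager API is published on 192.168.16.5:8000 per the `docker compose ps` output below), might look like:

```shell
# Sketch of the retry pattern behind "Wait for an healthy manager service":
# poll a health check up to a fixed number of times (50 in the log),
# sleeping between attempts.
wait_until_healthy() {
    local retries=${1:-50}
    local delay=${2:-5}
    while ! health_check; do
        retries=$((retries - 1))
        [ "$retries" -gt 0 ] || return 1
        sleep "$delay"
    done
}

# Hypothetical probe; the real handler likely queries the manager API.
health_check() {
    curl -fs http://192.168.16.5:8000/ >/dev/null
}
```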
2025-07-04 17:42:03.684145 | orchestrator | changed: [testbed-manager] 2025-07-04 17:42:03.684159 | orchestrator | 2025-07-04 17:42:03.684171 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2025-07-04 17:42:12.967200 | orchestrator | changed: [testbed-manager] 2025-07-04 17:42:12.967322 | orchestrator | 2025-07-04 17:42:12.967366 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2025-07-04 17:42:13.050526 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2025-07-04 17:42:13.050693 | orchestrator | 2025-07-04 17:42:13.050711 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-07-04 17:42:13.050724 | orchestrator | 2025-07-04 17:42:13.050736 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2025-07-04 17:42:13.113337 | orchestrator | skipping: [testbed-manager] 2025-07-04 17:42:13.113448 | orchestrator | 2025-07-04 17:42:13.113464 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-04 17:42:13.113477 | orchestrator | testbed-manager : ok=64 changed=35 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2025-07-04 17:42:13.113489 | orchestrator | 2025-07-04 17:42:13.224866 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-07-04 17:42:13.225022 | orchestrator | + deactivate 2025-07-04 17:42:13.225038 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-07-04 17:42:13.225052 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-07-04 17:42:13.225063 | orchestrator | + export PATH 2025-07-04 17:42:13.225079 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-07-04 
17:42:13.225092 | orchestrator | + '[' -n '' ']' 2025-07-04 17:42:13.225105 | orchestrator | + hash -r 2025-07-04 17:42:13.225116 | orchestrator | + '[' -n '' ']' 2025-07-04 17:42:13.225127 | orchestrator | + unset VIRTUAL_ENV 2025-07-04 17:42:13.225139 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-07-04 17:42:13.225150 | orchestrator | + '[' '!' '' = nondestructive ']' 2025-07-04 17:42:13.225161 | orchestrator | + unset -f deactivate 2025-07-04 17:42:13.225173 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2025-07-04 17:42:13.230382 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-07-04 17:42:13.230446 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-07-04 17:42:13.230458 | orchestrator | + local max_attempts=60 2025-07-04 17:42:13.230470 | orchestrator | + local name=ceph-ansible 2025-07-04 17:42:13.230481 | orchestrator | + local attempt_num=1 2025-07-04 17:42:13.231489 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-07-04 17:42:13.268676 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-07-04 17:42:13.268776 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-07-04 17:42:13.268792 | orchestrator | + local max_attempts=60 2025-07-04 17:42:13.268806 | orchestrator | + local name=kolla-ansible 2025-07-04 17:42:13.268817 | orchestrator | + local attempt_num=1 2025-07-04 17:42:13.269711 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-07-04 17:42:13.310503 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-07-04 17:42:13.310604 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-07-04 17:42:13.310622 | orchestrator | + local max_attempts=60 2025-07-04 17:42:13.310636 | orchestrator | + local name=osism-ansible 2025-07-04 17:42:13.310649 | orchestrator | + local attempt_num=1 2025-07-04 17:42:13.311706 | orchestrator | ++ /usr/bin/docker inspect -f 
'{{.State.Health.Status}}' osism-ansible 2025-07-04 17:42:13.346570 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-07-04 17:42:13.346757 | orchestrator | + [[ true == \t\r\u\e ]] 2025-07-04 17:42:13.346772 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-07-04 17:42:14.106535 | orchestrator | + docker compose --project-directory /opt/manager ps 2025-07-04 17:42:14.346170 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2025-07-04 17:42:14.346263 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20250530.0 "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy) 2025-07-04 17:42:14.346278 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20250530.0 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy) 2025-07-04 17:42:14.346288 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp 2025-07-04 17:42:14.346299 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp 2025-07-04 17:42:14.346309 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" beat About a minute ago Up About a minute (healthy) 2025-07-04 17:42:14.346318 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" flower About a minute ago Up About a minute (healthy) 2025-07-04 17:42:14.346327 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20250530.0 "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 52 seconds (healthy) 2025-07-04 17:42:14.346335 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" listener About a minute 
ago Up About a minute (healthy) 2025-07-04 17:42:14.346344 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.7.2 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp 2025-07-04 17:42:14.346353 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" openstack About a minute ago Up About a minute (healthy) 2025-07-04 17:42:14.346362 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.4-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp 2025-07-04 17:42:14.346370 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20250531.0 "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy) 2025-07-04 17:42:14.346379 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20250530.0 "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy) 2025-07-04 17:42:14.346388 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- sleep…" osismclient About a minute ago Up About a minute (healthy) 2025-07-04 17:42:14.354391 | orchestrator | ++ semver 9.1.0 7.0.0 2025-07-04 17:42:14.413073 | orchestrator | + [[ 1 -ge 0 ]] 2025-07-04 17:42:14.413177 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2025-07-04 17:42:14.417483 | orchestrator | + osism apply resolvconf -l testbed-manager 2025-07-04 17:42:16.224459 | orchestrator | Registering Redlock._acquired_script 2025-07-04 17:42:16.224581 | orchestrator | Registering Redlock._extend_script 2025-07-04 17:42:16.224596 | orchestrator | Registering Redlock._release_script 2025-07-04 17:42:16.441715 | orchestrator | 2025-07-04 17:42:16 | INFO  | Task 8b2662bb-b459-4ed4-b7e6-649def15a5a2 (resolvconf) was prepared for execution. 
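The `+`/`++` trace lines above show a `wait_for_container_healthy` helper polling `docker inspect` for each manager container. Only the variable setup and the first inspect call are visible (every container was already healthy on the first check), so the retry body below is a reconstruction, not the script's actual code:

```shell
# Reconstructed from the set -x trace above; the retry body is an
# assumption since the log never shows an unhealthy container.
wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    until [ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" = "healthy" ]; do
        if [ "$attempt_num" -ge "$max_attempts" ]; then
            echo "container $name not healthy after $max_attempts attempts" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 5
    done
}
```

In the log it is called as `wait_for_container_healthy 60 ceph-ansible`, once per long-running container.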
2025-07-04 17:42:16.441809 | orchestrator | 2025-07-04 17:42:16 | INFO  | It takes a moment until task 8b2662bb-b459-4ed4-b7e6-649def15a5a2 (resolvconf) has been started and output is visible here. 2025-07-04 17:42:20.634478 | orchestrator | 2025-07-04 17:42:20.636901 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2025-07-04 17:42:20.638701 | orchestrator | 2025-07-04 17:42:20.641405 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-07-04 17:42:20.641501 | orchestrator | Friday 04 July 2025 17:42:20 +0000 (0:00:00.157) 0:00:00.157 *********** 2025-07-04 17:42:24.567083 | orchestrator | ok: [testbed-manager] 2025-07-04 17:42:24.567324 | orchestrator | 2025-07-04 17:42:24.568594 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-07-04 17:42:24.569482 | orchestrator | Friday 04 July 2025 17:42:24 +0000 (0:00:03.937) 0:00:04.094 *********** 2025-07-04 17:42:24.630410 | orchestrator | skipping: [testbed-manager] 2025-07-04 17:42:24.630745 | orchestrator | 2025-07-04 17:42:24.632272 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-07-04 17:42:24.633682 | orchestrator | Friday 04 July 2025 17:42:24 +0000 (0:00:00.064) 0:00:04.159 *********** 2025-07-04 17:42:24.720396 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2025-07-04 17:42:24.721862 | orchestrator | 2025-07-04 17:42:24.723347 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-07-04 17:42:24.725648 | orchestrator | Friday 04 July 2025 17:42:24 +0000 (0:00:00.089) 0:00:04.248 *********** 2025-07-04 17:42:24.810285 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2025-07-04 17:42:24.811462 | orchestrator | 2025-07-04 17:42:24.812462 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-07-04 17:42:24.813464 | orchestrator | Friday 04 July 2025 17:42:24 +0000 (0:00:00.088) 0:00:04.337 *********** 2025-07-04 17:42:25.940029 | orchestrator | ok: [testbed-manager] 2025-07-04 17:42:25.941427 | orchestrator | 2025-07-04 17:42:25.941896 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-07-04 17:42:25.942642 | orchestrator | Friday 04 July 2025 17:42:25 +0000 (0:00:01.129) 0:00:05.467 *********** 2025-07-04 17:42:26.001273 | orchestrator | skipping: [testbed-manager] 2025-07-04 17:42:26.002418 | orchestrator | 2025-07-04 17:42:26.002983 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-07-04 17:42:26.004452 | orchestrator | Friday 04 July 2025 17:42:25 +0000 (0:00:00.060) 0:00:05.527 *********** 2025-07-04 17:42:27.497771 | orchestrator | ok: [testbed-manager] 2025-07-04 17:42:27.498183 | orchestrator | 2025-07-04 17:42:27.499021 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-07-04 17:42:27.500026 | orchestrator | Friday 04 July 2025 17:42:27 +0000 (0:00:01.496) 0:00:07.024 *********** 2025-07-04 17:42:27.575218 | orchestrator | skipping: [testbed-manager] 2025-07-04 17:42:27.576241 | orchestrator | 2025-07-04 17:42:27.577601 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-07-04 17:42:27.578655 | orchestrator | Friday 04 July 2025 17:42:27 +0000 (0:00:00.075) 0:00:07.100 *********** 2025-07-04 17:42:28.118005 | orchestrator | changed: [testbed-manager] 2025-07-04 17:42:28.118862 | orchestrator | 2025-07-04 
17:42:28.119916 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-07-04 17:42:28.120952 | orchestrator | Friday 04 July 2025 17:42:28 +0000 (0:00:00.546) 0:00:07.647 *********** 2025-07-04 17:42:29.091629 | orchestrator | changed: [testbed-manager] 2025-07-04 17:42:29.092291 | orchestrator | 2025-07-04 17:42:29.093010 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-07-04 17:42:29.093935 | orchestrator | Friday 04 July 2025 17:42:29 +0000 (0:00:00.971) 0:00:08.618 *********** 2025-07-04 17:42:29.940815 | orchestrator | ok: [testbed-manager] 2025-07-04 17:42:29.941650 | orchestrator | 2025-07-04 17:42:29.942621 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-07-04 17:42:29.943417 | orchestrator | Friday 04 July 2025 17:42:29 +0000 (0:00:00.849) 0:00:09.468 *********** 2025-07-04 17:42:30.019498 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2025-07-04 17:42:30.019667 | orchestrator | 2025-07-04 17:42:30.020950 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-07-04 17:42:30.021432 | orchestrator | Friday 04 July 2025 17:42:30 +0000 (0:00:00.080) 0:00:09.548 *********** 2025-07-04 17:42:31.063951 | orchestrator | changed: [testbed-manager] 2025-07-04 17:42:31.064638 | orchestrator | 2025-07-04 17:42:31.065480 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-04 17:42:31.066002 | orchestrator | 2025-07-04 17:42:31 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-07-04 17:42:31.066730 | orchestrator | 2025-07-04 17:42:31 | INFO  | Please wait and do not abort execution. 
2025-07-04 17:42:31.067563 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-07-04 17:42:31.068643 | orchestrator | 2025-07-04 17:42:31.069576 | orchestrator | 2025-07-04 17:42:31.070356 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-04 17:42:31.071303 | orchestrator | Friday 04 July 2025 17:42:31 +0000 (0:00:01.044) 0:00:10.592 *********** 2025-07-04 17:42:31.071753 | orchestrator | =============================================================================== 2025-07-04 17:42:31.072353 | orchestrator | Gathering Facts --------------------------------------------------------- 3.94s 2025-07-04 17:42:31.072918 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 1.50s 2025-07-04 17:42:31.073277 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.13s 2025-07-04 17:42:31.073907 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.04s 2025-07-04 17:42:31.074397 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 0.97s 2025-07-04 17:42:31.074819 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.85s 2025-07-04 17:42:31.075381 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.55s 2025-07-04 17:42:31.075833 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.09s 2025-07-04 17:42:31.076447 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.09s 2025-07-04 17:42:31.077022 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.08s 2025-07-04 17:42:31.077820 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s 2025-07-04 
17:42:31.078385 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.06s 2025-07-04 17:42:31.078805 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.06s 2025-07-04 17:42:31.411579 | orchestrator | + osism apply sshconfig 2025-07-04 17:42:32.887657 | orchestrator | Registering Redlock._acquired_script 2025-07-04 17:42:32.887745 | orchestrator | Registering Redlock._extend_script 2025-07-04 17:42:32.887764 | orchestrator | Registering Redlock._release_script 2025-07-04 17:42:32.938007 | orchestrator | 2025-07-04 17:42:32 | INFO  | Task 07b01004-ae47-4e83-a1e5-739c4bd15b22 (sshconfig) was prepared for execution. 2025-07-04 17:42:32.938170 | orchestrator | 2025-07-04 17:42:32 | INFO  | It takes a moment until task 07b01004-ae47-4e83-a1e5-739c4bd15b22 (sshconfig) has been started and output is visible here. 2025-07-04 17:42:36.765307 | orchestrator | 2025-07-04 17:42:36.766184 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2025-07-04 17:42:36.767975 | orchestrator | 2025-07-04 17:42:36.768798 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2025-07-04 17:42:36.769353 | orchestrator | Friday 04 July 2025 17:42:36 +0000 (0:00:00.168) 0:00:00.168 *********** 2025-07-04 17:42:37.316684 | orchestrator | ok: [testbed-manager] 2025-07-04 17:42:37.317587 | orchestrator | 2025-07-04 17:42:37.320316 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2025-07-04 17:42:37.320364 | orchestrator | Friday 04 July 2025 17:42:37 +0000 (0:00:00.556) 0:00:00.724 *********** 2025-07-04 17:42:37.848006 | orchestrator | changed: [testbed-manager] 2025-07-04 17:42:37.848803 | orchestrator | 2025-07-04 17:42:37.849552 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2025-07-04 17:42:37.850486 | orchestrator | 
Friday 04 July 2025 17:42:37 +0000 (0:00:00.531) 0:00:01.256 *********** 2025-07-04 17:42:43.810338 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2025-07-04 17:42:43.812141 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2025-07-04 17:42:43.812207 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2025-07-04 17:42:43.813391 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2025-07-04 17:42:43.814226 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2025-07-04 17:42:43.815220 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2025-07-04 17:42:43.816136 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5) 2025-07-04 17:42:43.816907 | orchestrator | 2025-07-04 17:42:43.818077 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2025-07-04 17:42:43.818594 | orchestrator | Friday 04 July 2025 17:42:43 +0000 (0:00:05.962) 0:00:07.218 *********** 2025-07-04 17:42:43.878547 | orchestrator | skipping: [testbed-manager] 2025-07-04 17:42:43.879970 | orchestrator | 2025-07-04 17:42:43.880745 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2025-07-04 17:42:43.881867 | orchestrator | Friday 04 July 2025 17:42:43 +0000 (0:00:00.069) 0:00:07.287 *********** 2025-07-04 17:42:44.498222 | orchestrator | changed: [testbed-manager] 2025-07-04 17:42:44.498679 | orchestrator | 2025-07-04 17:42:44.499999 | orchestrator | 2025-07-04 17:42:44 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-07-04 17:42:44.500850 | orchestrator | 2025-07-04 17:42:44 | INFO  | Please wait and do not abort execution. 
2025-07-04 17:42:44.501250 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-04 17:42:44.501522 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-07-04 17:42:44.503099 | orchestrator | 2025-07-04 17:42:44.503148 | orchestrator | 2025-07-04 17:42:44.504127 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-04 17:42:44.504663 | orchestrator | Friday 04 July 2025 17:42:44 +0000 (0:00:00.619) 0:00:07.907 *********** 2025-07-04 17:42:44.505386 | orchestrator | =============================================================================== 2025-07-04 17:42:44.506724 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.96s 2025-07-04 17:42:44.508169 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.62s 2025-07-04 17:42:44.509136 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.56s 2025-07-04 17:42:44.510140 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.53s 2025-07-04 17:42:44.510942 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.07s 2025-07-04 17:42:45.006230 | orchestrator | + osism apply known-hosts 2025-07-04 17:42:46.578539 | orchestrator | Registering Redlock._acquired_script 2025-07-04 17:42:46.578654 | orchestrator | Registering Redlock._extend_script 2025-07-04 17:42:46.578671 | orchestrator | Registering Redlock._release_script 2025-07-04 17:42:46.643065 | orchestrator | 2025-07-04 17:42:46 | INFO  | Task fbe55f70-ef49-4ea4-b324-0f7c9465fd32 (known-hosts) was prepared for execution. 2025-07-04 17:42:46.643126 | orchestrator | 2025-07-04 17:42:46 | INFO  | It takes a moment until task fbe55f70-ef49-4ea4-b324-0f7c9465fd32 (known-hosts) has been started and output is visible here. 
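The known_hosts play that follows scans each inventory host with `ssh-keyscan` and writes the collected rsa, ecdsa and ed25519 keys out. By hand, the scan step amounts to roughly the following (illustrative, not the role's actual code; hostnames are from the testbed inventory):

```shell
# Illustrative equivalent of the known_hosts role's scan step: collect
# the rsa, ecdsa and ed25519 host keys seen in the log output.
scan_host_keys() {
    for host in "$@"; do
        ssh-keyscan -t rsa,ecdsa,ed25519 "$host" 2>/dev/null
    done
}

# Usage (commented out -- requires reachable hosts):
#   scan_host_keys testbed-manager testbed-node-0 >> ~/.ssh/known_hosts
```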
2025-07-04 17:42:50.294835 | orchestrator | 2025-07-04 17:42:50.297922 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2025-07-04 17:42:50.300514 | orchestrator | 2025-07-04 17:42:50.302113 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2025-07-04 17:42:50.303060 | orchestrator | Friday 04 July 2025 17:42:50 +0000 (0:00:00.162) 0:00:00.162 *********** 2025-07-04 17:42:57.112435 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-07-04 17:42:57.114187 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-07-04 17:42:57.115015 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-07-04 17:42:57.116396 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-07-04 17:42:57.117618 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-07-04 17:42:57.118216 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-07-04 17:42:57.119659 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-07-04 17:42:57.120042 | orchestrator | 2025-07-04 17:42:57.120525 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2025-07-04 17:42:57.121744 | orchestrator | Friday 04 July 2025 17:42:57 +0000 (0:00:06.820) 0:00:06.982 *********** 2025-07-04 17:42:57.324745 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-07-04 17:42:57.325692 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-07-04 17:42:57.327404 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-07-04 17:42:57.328230 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-07-04 17:42:57.329086 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-07-04 17:42:57.329866 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-07-04 17:42:57.330593 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-07-04 17:42:57.331163 | orchestrator | 2025-07-04 17:42:57.331551 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-07-04 17:42:57.332266 | orchestrator | Friday 04 July 2025 17:42:57 +0000 (0:00:00.212) 0:00:07.194 *********** 2025-07-04 17:42:58.589709 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJn8UMPWpBqR94csrVGDyzQb7EwKKnEz2Yi6Rshrebv7d1TOmDHBbWTpEi4bQWyZ5m+Sq0etDhnabTLyTqFpTa8=) 2025-07-04 17:42:58.589825 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCxOowavFeT+D6ED9M8crXrpGbttCfe08QZUJM1Dz6AbF6S0SUCln3GpmJKKWYLMA9O9Vi9sOhKItscv0pNnOCA/4BjTt9KN+gWDFf2t47CY1MuUrmfxFgqGT01Dg/3W4cCjo1SPkA6kCgfrOojQDLBHxAdXBuQQh5OS37Gu0N9SeoOgeKLS5/vJlW8XL0pgds831iIbSR8wrdGxYixB540GpYfIfCFS5tal7GYpDVvJkC4pFM5aETdOoIfqZpGr4cGTWEAkQnbWWnUERJxzQMxx7/xPztMYw7AOiOV57Ls9UlUbjZZlJ9IzQsSdkftU3usyIR2nDrGyZjuAW2IKyDdLv5dtoz5JE0L4Z9g1tAr3Ae/5uEUnPgW8+3fBiibLqgR+mHS6ngC1VHRqjK/LqyPIyAXQgMJQjUy86s8QuSaYKdqi5vbwAOZ/SwJUNwg5GpPGguZpCsWILjLOJiyURsfXXIplXj9pK6uY+egFv6aseWUMy9nK3yaVe5zWAnxkEs=) 2025-07-04 17:42:58.592133 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMBbHl/EMVowEy21jpzBVAp79BWxgS2mZEZEjF0POIXq) 2025-07-04 17:42:58.592640 | orchestrator | 2025-07-04 17:42:58.593597 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-07-04 17:42:58.594288 | orchestrator | Friday 04 July 2025 17:42:58 +0000 (0:00:01.265) 0:00:08.460 *********** 2025-07-04 17:42:59.691439 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDiVzyurXC2JNhtBAxHApZ5TK89IcjLTCHphwD2GTvyjNFuXLkBazuHL68Pbzn/vEiYmBHIs2/BXNk8Gnx1sSUBhvZtzZuXtIizykq2f3fynP5WvUHNyxd0hWWRg4FzSFYZM0pi6ClujPEnZvtsXKlNjUyvU74XiHDgDXr05g7Iwi0o0tY8mcn14zs+AlM/NQgZDuN/YaWosNFLMMquAQ8rzTOcizw+q6P4Sa4q1sTGp9vh/345xTAaRuQFXe6u9NuX0YTy5xDhi8OfbI2RJ0a4/RU6ymT7s0y/bD+ax+FNXI7wpX66rr6XY2rbRR3h6jUkW61TiM7UxEKl78NQwp+Hlnw+s9/wSzFK97XfhrgcdO/FEgWOR0oyfUaF5yaGLrjJa6++WGCAhs+9njjSioGUayEFOCt1JtpoykUBEwjqe7eMrymk2hUsVViEJ8fwAUtcpxqBsCKY4IfXv3phyXU5+OniUzoP3A1HPzcl0imGqSzdiP/wUxLz3X8WabFWla0=) 2025-07-04 17:42:59.691582 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFuV7FeMGnLn47slGv3cgPOCTQtQpcVNX2UWDh6mUrjeyxo8aQrheWEYhA2+QP2d3OcATbG8hZMtPUXQHWtDuSY=) 2025-07-04 17:42:59.692447 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANEQxShl7z9kfv5cjpYk2VlCTTxVDNB4e8ek7nEyOfy) 2025-07-04 17:42:59.694528 | orchestrator | 2025-07-04 17:42:59.695770 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-07-04 17:42:59.695790 | orchestrator | Friday 04 July 2025 17:42:59 +0000 (0:00:01.100) 0:00:09.560 *********** 2025-07-04 17:43:00.809957 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQChZzBuKPG1mE/O+WU/Zciq1f9ttG8Pe89ipEq+1D1mw9usok02lr1ySbTwBfZIhzOfEa1Spf+jFVRwk66NV6TKgwPNnD9tIGMov7dU0ykNVMhrKJrgcyBs3diXRm3tZMDWm6aerq9uotxgvwPAz4saeIX0crG/V2wvvt/xNhmAb0zVkfVjiweERbwcVSLwBi6INR1JOwzqAWJUx8w32YyxThVs+f5/RFQubddHNLC5ldy2mcqnoWgHli654i8CkZl32EykUIPOp/B7wrHsc1di+RtZKI4Aa5/MXLKk95VxFVLMqpVwMB7HEUCNU+VtzI9zcuOUx4PpEq+V8KMPCAsP/walrWa/RTGoBXlw6Datkf53uTQEsLuVFZDGMsigPmUz3q1Ia9wHew6cB0yeZwHHdSfKo3gHXG1Fij+gDUKUn76GlkaLtRwjyEgtZeEUldwesDLZdtwZaM8YwR4wSeyQ5gOngzwKSvUkVtDdCI2SWRnoxSB38ef/zwZmC/px8n8=) 2025-07-04 17:43:00.810511 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNF+ZjUHEUdyAgSZGosnlNttR04Jx5+8I6z1zI3cra1S/5RmNMvZz0Kr685Lh80ZFouNbIGxD1XNp1ASPfG2nwg=) 2025-07-04 17:43:00.811384 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOqIIBUkCh8Csei3vrS7Gurw86b0f7m55ufE2LwX2M10) 2025-07-04 17:43:00.811938 | orchestrator | 2025-07-04 17:43:00.812990 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-07-04 17:43:00.813667 | orchestrator | Friday 04 July 2025 17:43:00 +0000 (0:00:01.117) 0:00:10.678 *********** 2025-07-04 17:43:01.895181 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC6gEckc7khXEOZoevgqD92O6Nwv/HKroeaSddMmsBhUBaeWVtZ2bbC21wH22xqp/bEvmwiEkLKwjGr3gkZyLLCpOCYhUJghmvrj19QoQzre1GsNOQlIakowe5lYeWqatU0U/JwFh82sV9DelTewJXF0RiMy0cuzFTuXxcRkrn+zRSGcmr7FjqsVEKHPePDISWyiAt0fMhTgVHlbB9YT/fYHXRKE+qLzh/LL9vjqypLqNyij/4dV15EAtyLaNcZVU0HQQxeDD8iRiSHzt/sX7xrwVSK/a+VUgJ0I7oI9kb9SzKrq8eANJC9s32nxXM4psTMcLwmYYlIk01FcPtU/sPct5bVUhGMO6yiKMs9gbZ5OwUTRSiyEyNIm/mN976TJXmTXb87AVg/5hw/SQGkTTfOgKW4JKwK9CnFHwCZR9wPfCWyp7Ms61AXJlsVCfElxnhqoJhfb2LvYBPOm6GHlKjAdYltS5EjnoxuLB2xTGOXscQ9Tvvq626TYxc3D3EGgvU=) 2025-07-04 17:43:01.895279 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEB2Dep48/+WvV592F5zTy/gnTaUVDe0LVpVofPjpSeKpmcAvF5uAnwx6JzbNx+TbWgqkhJg459n7SmpXQTnRQM=) 2025-07-04 17:43:01.896367 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIaV6bH8dtMvDJf1oSQUyzdtELy6wbJdIZe7DeGXgRqg) 2025-07-04 17:43:01.897024 | orchestrator | 2025-07-04 17:43:01.898369 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-07-04 17:43:01.899404 | orchestrator | Friday 04 July 2025 17:43:01 +0000 (0:00:01.086) 0:00:11.764 *********** 2025-07-04 17:43:02.993616 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDP450VCd5lH2HygNpIDJu3EaTgjvKjUrsdFRK1xwGbIOz6AnKox6Qgk7g4CkBXCIM2eDiaTEiI6XqC4R4iJ65KLHrzh+JzYeZq0fs/uqI6TJ+QttIFQs7nw8zJsOB/LrnFDRySlBTX2Wt2mXIH8zAeiTsw/UyyZEvRneFuIiF5zuhMX5ThZZ/FW5ez2JAzC6OkRDzLD+i9m7g6pKwWFrh+RZPgOk9YUutNLZUq17WGRmyZbOkKARQ6yXGzRKRCfnOLHurOBxiAh55KLxF/nzDztKlUnzQjaG5O5rJzNPdciPuGK976wdLVR025MZTw1VuLh5woTno5f+rvYMnlwZI9Rw8DN1yusl0kJl6yPNZwqu8/3qDWIfAU1NJNpLEtKXFdZ+EXP04G7ZW2kyk16DTy4vd/wuIbexZxAH6rh0x3In9rlEf1Jv9GFib5YrhPHOKOAPcX3C+AEGSVtJaJrY9MVURFjrzv2TSwKR9dWGTKrsXr8+qr3a40QTw782OB64M=) 2025-07-04 17:43:02.993871 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICNbFxlKMK+dL5344R5DcwM2I07yWlgOOHpYbzBUNZgM) 2025-07-04 17:43:02.995235 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBXpyVdr0gxMvhfQHtMUE/448IfZDoaqGIhFFaTy6Dype67RGf50BMYbyUf64PEfx6DPP5DwvpItwslI3xN1NWc=) 2025-07-04 17:43:02.996096 | orchestrator | 2025-07-04 17:43:02.996502 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-07-04 17:43:02.997287 | orchestrator | Friday 04 July 2025 17:43:02 +0000 (0:00:01.099) 0:00:12.863 *********** 2025-07-04 17:43:04.092419 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCwkAjp7uHU2eiNJ5Y3d3ycI9aQy4rnw562astsI35fHQDXUj+HSmwpT+jcnFU3p86o3Plocq1yLKoLkyX91GN8+yG95BjvV1mcNujk4cw3Zdj3LkC9h4jXqbWUfFPXDKgo0V5HzK87LdHupS/RGKj5K4JJ4xBn1WCFKxdLDNBPQzCIfp4aVrhJBpXC8G2JqFynwefMLMJgMUZbQrMf1nUBm5rLOVvRQHLXElGb5kXPKXPIi4yBiv9PDunQO6lDWieM8rBtmqGTtToNnN4emoHuBmrLDwv+HAX0R2zRjGw79K6sQh01ZqArhI6RY5ScwMjNpmDW8XnTAz9dW+viFXahbVXb79lpjie42+uUknpSdBZqVc1x+TeSyIgldyXBrhJ+UVcNG2dd28lTArvXKEAIMy9+TRmq7rWcXpKBWHDXR8TgQDMf9+utrQ5eNlvFPJSFZi6hPoJouuKMf8MBSNQTW6QZBy+ZRsHB7Tzn4bCCVmKCdswGdwav+i+m+MTIBt0=) 2025-07-04 17:43:04.093264 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLywgUiXZvOd0xtf31yZ+Z/pAnT45vr9FjyZv6TrpbeYVLgUJLbFw397++Hmm/x3rwqelKIcWYHd9oVQsrhaVfo=) 2025-07-04 17:43:04.093595 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIE83DNLPLsBv3mxg1uWAx/4+kj8w9g96LZjd/YwuaChQ) 2025-07-04 17:43:04.094462 | orchestrator | 2025-07-04 17:43:04.095155 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-07-04 17:43:04.095655 | orchestrator | Friday 04 July 2025 17:43:04 +0000 (0:00:01.098) 
0:00:13.962 *********** 2025-07-04 17:43:05.196402 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF1spJUlUqmX8VBVpqZVEVaEgrfDrKzoImH2aH9tOXyu) 2025-07-04 17:43:05.197289 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCzo/uxD80ekcQAbu+8sMZHLg2cruvPLAg+MhbnntGAt4FswbkvchxGBatNMSkKHjiAiR+4IUq1yyeIQjsKSTAUJp11u7qK3FJlXH8P0Jo79Ym1yylDRmBf13DqKTqA+8BJLhPvdV2VL89HzIZYYzO5YOhL/GdFQRTuHmL+WaLQAaw8//WeNaCh3HxKUnx4MkHxSuupoC1hk6049mXHA4C5XMbTqWPAZwoFUVmoXfnRvVySTvbfo/Zaq0V1teh3vs7hgUM2HW10yekhkqGN2cOI4trh/PQSyK7NJZXIUQ5fK1vSXxP+ldhQffAUr0lsrvMsKuVfywPk70hsg7xaaJnLyrKb+qOZPUBTPYOHbfmS0zeC47EizJ0kr9QZ82glCxRpAJckxGr/bk6cY+Ndn8TjIty4wvW+tSS3pSEp2zNFLn5htKU9HzXyhHkmi4IyUnnSwkHlXQTIxg4zZ4Y/s4S7g1IwWfx0tS3Ziu/cWDESyFL6wqTOVuVfpewVArWBGg0=) 2025-07-04 17:43:05.197498 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEup0xbQ79Qiqn3Txk2dKKc7xUjKiImY5kfAXRJhakC6XxXVODyIP56K/dSDTgpeEYwDBQRhGwkjm29TURK5zeM=) 2025-07-04 17:43:05.198572 | orchestrator | 2025-07-04 17:43:05.198672 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2025-07-04 17:43:05.200084 | orchestrator | Friday 04 July 2025 17:43:05 +0000 (0:00:01.103) 0:00:15.065 *********** 2025-07-04 17:43:10.508434 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-07-04 17:43:10.509296 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-07-04 17:43:10.510532 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-07-04 17:43:10.514129 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-07-04 17:43:10.514174 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-07-04 17:43:10.516634 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-07-04 17:43:10.517631 | orchestrator | ok: 
[testbed-manager] => (item=testbed-node-5) 2025-07-04 17:43:10.518345 | orchestrator | 2025-07-04 17:43:10.519015 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2025-07-04 17:43:10.519723 | orchestrator | Friday 04 July 2025 17:43:10 +0000 (0:00:05.310) 0:00:20.376 *********** 2025-07-04 17:43:10.670281 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-07-04 17:43:10.670688 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-07-04 17:43:10.671525 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-07-04 17:43:10.673205 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-07-04 17:43:10.674423 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-07-04 17:43:10.675807 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-07-04 17:43:10.677082 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-07-04 17:43:10.678430 | orchestrator | 2025-07-04 17:43:10.679661 
| orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-07-04 17:43:10.680611 | orchestrator | Friday 04 July 2025 17:43:10 +0000 (0:00:00.164) 0:00:20.541 *********** 2025-07-04 17:43:11.747420 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJn8UMPWpBqR94csrVGDyzQb7EwKKnEz2Yi6Rshrebv7d1TOmDHBbWTpEi4bQWyZ5m+Sq0etDhnabTLyTqFpTa8=) 2025-07-04 17:43:11.747761 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCxOowavFeT+D6ED9M8crXrpGbttCfe08QZUJM1Dz6AbF6S0SUCln3GpmJKKWYLMA9O9Vi9sOhKItscv0pNnOCA/4BjTt9KN+gWDFf2t47CY1MuUrmfxFgqGT01Dg/3W4cCjo1SPkA6kCgfrOojQDLBHxAdXBuQQh5OS37Gu0N9SeoOgeKLS5/vJlW8XL0pgds831iIbSR8wrdGxYixB540GpYfIfCFS5tal7GYpDVvJkC4pFM5aETdOoIfqZpGr4cGTWEAkQnbWWnUERJxzQMxx7/xPztMYw7AOiOV57Ls9UlUbjZZlJ9IzQsSdkftU3usyIR2nDrGyZjuAW2IKyDdLv5dtoz5JE0L4Z9g1tAr3Ae/5uEUnPgW8+3fBiibLqgR+mHS6ngC1VHRqjK/LqyPIyAXQgMJQjUy86s8QuSaYKdqi5vbwAOZ/SwJUNwg5GpPGguZpCsWILjLOJiyURsfXXIplXj9pK6uY+egFv6aseWUMy9nK3yaVe5zWAnxkEs=) 2025-07-04 17:43:11.749996 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMBbHl/EMVowEy21jpzBVAp79BWxgS2mZEZEjF0POIXq) 2025-07-04 17:43:11.750185 | orchestrator | 2025-07-04 17:43:11.751040 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-07-04 17:43:11.751714 | orchestrator | Friday 04 July 2025 17:43:11 +0000 (0:00:01.075) 0:00:21.616 *********** 2025-07-04 17:43:12.859548 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDiVzyurXC2JNhtBAxHApZ5TK89IcjLTCHphwD2GTvyjNFuXLkBazuHL68Pbzn/vEiYmBHIs2/BXNk8Gnx1sSUBhvZtzZuXtIizykq2f3fynP5WvUHNyxd0hWWRg4FzSFYZM0pi6ClujPEnZvtsXKlNjUyvU74XiHDgDXr05g7Iwi0o0tY8mcn14zs+AlM/NQgZDuN/YaWosNFLMMquAQ8rzTOcizw+q6P4Sa4q1sTGp9vh/345xTAaRuQFXe6u9NuX0YTy5xDhi8OfbI2RJ0a4/RU6ymT7s0y/bD+ax+FNXI7wpX66rr6XY2rbRR3h6jUkW61TiM7UxEKl78NQwp+Hlnw+s9/wSzFK97XfhrgcdO/FEgWOR0oyfUaF5yaGLrjJa6++WGCAhs+9njjSioGUayEFOCt1JtpoykUBEwjqe7eMrymk2hUsVViEJ8fwAUtcpxqBsCKY4IfXv3phyXU5+OniUzoP3A1HPzcl0imGqSzdiP/wUxLz3X8WabFWla0=) 2025-07-04 17:43:12.860612 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFuV7FeMGnLn47slGv3cgPOCTQtQpcVNX2UWDh6mUrjeyxo8aQrheWEYhA2+QP2d3OcATbG8hZMtPUXQHWtDuSY=) 2025-07-04 17:43:12.862430 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANEQxShl7z9kfv5cjpYk2VlCTTxVDNB4e8ek7nEyOfy) 2025-07-04 17:43:12.863091 | orchestrator | 2025-07-04 17:43:12.864227 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-07-04 17:43:12.865336 | orchestrator | Friday 04 July 2025 17:43:12 +0000 (0:00:01.112) 0:00:22.728 *********** 2025-07-04 17:43:13.939039 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOqIIBUkCh8Csei3vrS7Gurw86b0f7m55ufE2LwX2M10) 2025-07-04 17:43:13.939368 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQChZzBuKPG1mE/O+WU/Zciq1f9ttG8Pe89ipEq+1D1mw9usok02lr1ySbTwBfZIhzOfEa1Spf+jFVRwk66NV6TKgwPNnD9tIGMov7dU0ykNVMhrKJrgcyBs3diXRm3tZMDWm6aerq9uotxgvwPAz4saeIX0crG/V2wvvt/xNhmAb0zVkfVjiweERbwcVSLwBi6INR1JOwzqAWJUx8w32YyxThVs+f5/RFQubddHNLC5ldy2mcqnoWgHli654i8CkZl32EykUIPOp/B7wrHsc1di+RtZKI4Aa5/MXLKk95VxFVLMqpVwMB7HEUCNU+VtzI9zcuOUx4PpEq+V8KMPCAsP/walrWa/RTGoBXlw6Datkf53uTQEsLuVFZDGMsigPmUz3q1Ia9wHew6cB0yeZwHHdSfKo3gHXG1Fij+gDUKUn76GlkaLtRwjyEgtZeEUldwesDLZdtwZaM8YwR4wSeyQ5gOngzwKSvUkVtDdCI2SWRnoxSB38ef/zwZmC/px8n8=) 2025-07-04 17:43:13.941646 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNF+ZjUHEUdyAgSZGosnlNttR04Jx5+8I6z1zI3cra1S/5RmNMvZz0Kr685Lh80ZFouNbIGxD1XNp1ASPfG2nwg=) 2025-07-04 17:43:13.941678 | orchestrator | 2025-07-04 17:43:13.942793 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-07-04 17:43:13.943248 | orchestrator | Friday 04 July 2025 17:43:13 +0000 (0:00:01.080) 0:00:23.809 *********** 2025-07-04 17:43:15.009637 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC6gEckc7khXEOZoevgqD92O6Nwv/HKroeaSddMmsBhUBaeWVtZ2bbC21wH22xqp/bEvmwiEkLKwjGr3gkZyLLCpOCYhUJghmvrj19QoQzre1GsNOQlIakowe5lYeWqatU0U/JwFh82sV9DelTewJXF0RiMy0cuzFTuXxcRkrn+zRSGcmr7FjqsVEKHPePDISWyiAt0fMhTgVHlbB9YT/fYHXRKE+qLzh/LL9vjqypLqNyij/4dV15EAtyLaNcZVU0HQQxeDD8iRiSHzt/sX7xrwVSK/a+VUgJ0I7oI9kb9SzKrq8eANJC9s32nxXM4psTMcLwmYYlIk01FcPtU/sPct5bVUhGMO6yiKMs9gbZ5OwUTRSiyEyNIm/mN976TJXmTXb87AVg/5hw/SQGkTTfOgKW4JKwK9CnFHwCZR9wPfCWyp7Ms61AXJlsVCfElxnhqoJhfb2LvYBPOm6GHlKjAdYltS5EjnoxuLB2xTGOXscQ9Tvvq626TYxc3D3EGgvU=) 2025-07-04 17:43:15.009745 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEB2Dep48/+WvV592F5zTy/gnTaUVDe0LVpVofPjpSeKpmcAvF5uAnwx6JzbNx+TbWgqkhJg459n7SmpXQTnRQM=) 
2025-07-04 17:43:15.011135 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIaV6bH8dtMvDJf1oSQUyzdtELy6wbJdIZe7DeGXgRqg) 2025-07-04 17:43:15.011932 | orchestrator | 2025-07-04 17:43:15.012401 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-07-04 17:43:15.013066 | orchestrator | Friday 04 July 2025 17:43:14 +0000 (0:00:01.069) 0:00:24.879 *********** 2025-07-04 17:43:16.077881 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICNbFxlKMK+dL5344R5DcwM2I07yWlgOOHpYbzBUNZgM) 2025-07-04 17:43:16.078130 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDP450VCd5lH2HygNpIDJu3EaTgjvKjUrsdFRK1xwGbIOz6AnKox6Qgk7g4CkBXCIM2eDiaTEiI6XqC4R4iJ65KLHrzh+JzYeZq0fs/uqI6TJ+QttIFQs7nw8zJsOB/LrnFDRySlBTX2Wt2mXIH8zAeiTsw/UyyZEvRneFuIiF5zuhMX5ThZZ/FW5ez2JAzC6OkRDzLD+i9m7g6pKwWFrh+RZPgOk9YUutNLZUq17WGRmyZbOkKARQ6yXGzRKRCfnOLHurOBxiAh55KLxF/nzDztKlUnzQjaG5O5rJzNPdciPuGK976wdLVR025MZTw1VuLh5woTno5f+rvYMnlwZI9Rw8DN1yusl0kJl6yPNZwqu8/3qDWIfAU1NJNpLEtKXFdZ+EXP04G7ZW2kyk16DTy4vd/wuIbexZxAH6rh0x3In9rlEf1Jv9GFib5YrhPHOKOAPcX3C+AEGSVtJaJrY9MVURFjrzv2TSwKR9dWGTKrsXr8+qr3a40QTw782OB64M=) 2025-07-04 17:43:16.078158 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBXpyVdr0gxMvhfQHtMUE/448IfZDoaqGIhFFaTy6Dype67RGf50BMYbyUf64PEfx6DPP5DwvpItwslI3xN1NWc=) 2025-07-04 17:43:16.078208 | orchestrator | 2025-07-04 17:43:16.079109 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-07-04 17:43:16.079876 | orchestrator | Friday 04 July 2025 17:43:16 +0000 (0:00:01.064) 0:00:25.943 *********** 2025-07-04 17:43:18.196659 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLywgUiXZvOd0xtf31yZ+Z/pAnT45vr9FjyZv6TrpbeYVLgUJLbFw397++Hmm/x3rwqelKIcWYHd9oVQsrhaVfo=) 2025-07-04 17:43:18.197437 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCwkAjp7uHU2eiNJ5Y3d3ycI9aQy4rnw562astsI35fHQDXUj+HSmwpT+jcnFU3p86o3Plocq1yLKoLkyX91GN8+yG95BjvV1mcNujk4cw3Zdj3LkC9h4jXqbWUfFPXDKgo0V5HzK87LdHupS/RGKj5K4JJ4xBn1WCFKxdLDNBPQzCIfp4aVrhJBpXC8G2JqFynwefMLMJgMUZbQrMf1nUBm5rLOVvRQHLXElGb5kXPKXPIi4yBiv9PDunQO6lDWieM8rBtmqGTtToNnN4emoHuBmrLDwv+HAX0R2zRjGw79K6sQh01ZqArhI6RY5ScwMjNpmDW8XnTAz9dW+viFXahbVXb79lpjie42+uUknpSdBZqVc1x+TeSyIgldyXBrhJ+UVcNG2dd28lTArvXKEAIMy9+TRmq7rWcXpKBWHDXR8TgQDMf9+utrQ5eNlvFPJSFZi6hPoJouuKMf8MBSNQTW6QZBy+ZRsHB7Tzn4bCCVmKCdswGdwav+i+m+MTIBt0=) 2025-07-04 17:43:18.197859 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIE83DNLPLsBv3mxg1uWAx/4+kj8w9g96LZjd/YwuaChQ) 2025-07-04 17:43:18.199287 | orchestrator | 2025-07-04 17:43:18.201284 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-07-04 17:43:18.201839 | orchestrator | Friday 04 July 2025 17:43:18 +0000 (0:00:02.122) 0:00:28.066 *********** 2025-07-04 17:43:19.296466 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCzo/uxD80ekcQAbu+8sMZHLg2cruvPLAg+MhbnntGAt4FswbkvchxGBatNMSkKHjiAiR+4IUq1yyeIQjsKSTAUJp11u7qK3FJlXH8P0Jo79Ym1yylDRmBf13DqKTqA+8BJLhPvdV2VL89HzIZYYzO5YOhL/GdFQRTuHmL+WaLQAaw8//WeNaCh3HxKUnx4MkHxSuupoC1hk6049mXHA4C5XMbTqWPAZwoFUVmoXfnRvVySTvbfo/Zaq0V1teh3vs7hgUM2HW10yekhkqGN2cOI4trh/PQSyK7NJZXIUQ5fK1vSXxP+ldhQffAUr0lsrvMsKuVfywPk70hsg7xaaJnLyrKb+qOZPUBTPYOHbfmS0zeC47EizJ0kr9QZ82glCxRpAJckxGr/bk6cY+Ndn8TjIty4wvW+tSS3pSEp2zNFLn5htKU9HzXyhHkmi4IyUnnSwkHlXQTIxg4zZ4Y/s4S7g1IwWfx0tS3Ziu/cWDESyFL6wqTOVuVfpewVArWBGg0=) 2025-07-04 17:43:19.296591 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 
ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEup0xbQ79Qiqn3Txk2dKKc7xUjKiImY5kfAXRJhakC6XxXVODyIP56K/dSDTgpeEYwDBQRhGwkjm29TURK5zeM=) 2025-07-04 17:43:19.297454 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF1spJUlUqmX8VBVpqZVEVaEgrfDrKzoImH2aH9tOXyu) 2025-07-04 17:43:19.298755 | orchestrator | 2025-07-04 17:43:19.300025 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2025-07-04 17:43:19.301068 | orchestrator | Friday 04 July 2025 17:43:19 +0000 (0:00:01.099) 0:00:29.165 *********** 2025-07-04 17:43:19.463865 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-07-04 17:43:19.464883 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-07-04 17:43:19.466112 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-07-04 17:43:19.467499 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-07-04 17:43:19.467651 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-07-04 17:43:19.468124 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-07-04 17:43:19.468689 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-07-04 17:43:19.469323 | orchestrator | skipping: [testbed-manager] 2025-07-04 17:43:19.469810 | orchestrator | 2025-07-04 17:43:19.470299 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2025-07-04 17:43:19.470831 | orchestrator | Friday 04 July 2025 17:43:19 +0000 (0:00:00.166) 0:00:29.332 *********** 2025-07-04 17:43:19.534980 | orchestrator | skipping: [testbed-manager] 2025-07-04 17:43:19.535413 | orchestrator | 2025-07-04 17:43:19.536104 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2025-07-04 17:43:19.536747 | orchestrator | Friday 04 July 2025 17:43:19 +0000 
(0:00:00.072) 0:00:29.404 *********** 2025-07-04 17:43:19.589067 | orchestrator | skipping: [testbed-manager] 2025-07-04 17:43:19.589174 | orchestrator | 2025-07-04 17:43:19.589266 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2025-07-04 17:43:19.590497 | orchestrator | Friday 04 July 2025 17:43:19 +0000 (0:00:00.054) 0:00:29.458 *********** 2025-07-04 17:43:20.113361 | orchestrator | changed: [testbed-manager] 2025-07-04 17:43:20.113967 | orchestrator | 2025-07-04 17:43:20.115166 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-04 17:43:20.115313 | orchestrator | 2025-07-04 17:43:20 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-07-04 17:43:20.115336 | orchestrator | 2025-07-04 17:43:20 | INFO  | Please wait and do not abort execution. 2025-07-04 17:43:20.116703 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-07-04 17:43:20.117611 | orchestrator | 2025-07-04 17:43:20.119354 | orchestrator | 2025-07-04 17:43:20.119528 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-04 17:43:20.120533 | orchestrator | Friday 04 July 2025 17:43:20 +0000 (0:00:00.524) 0:00:29.983 *********** 2025-07-04 17:43:20.121484 | orchestrator | =============================================================================== 2025-07-04 17:43:20.122361 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.82s 2025-07-04 17:43:20.122988 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.31s 2025-07-04 17:43:20.123664 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 2.12s 2025-07-04 17:43:20.124245 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 
1.27s 2025-07-04 17:43:20.124665 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 2025-07-04 17:43:20.125279 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2025-07-04 17:43:20.125835 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2025-07-04 17:43:20.126264 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2025-07-04 17:43:20.127319 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2025-07-04 17:43:20.127693 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2025-07-04 17:43:20.128209 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2025-07-04 17:43:20.128782 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2025-07-04 17:43:20.129321 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2025-07-04 17:43:20.129985 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2025-07-04 17:43:20.130365 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2025-07-04 17:43:20.130503 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2025-07-04 17:43:20.130829 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.52s 2025-07-04 17:43:20.131220 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.21s 2025-07-04 17:43:20.131626 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.17s 2025-07-04 17:43:20.131878 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all 
hosts with ansible_host --- 0.16s 2025-07-04 17:43:20.610775 | orchestrator | + osism apply squid 2025-07-04 17:43:22.304773 | orchestrator | Registering Redlock._acquired_script 2025-07-04 17:43:22.304960 | orchestrator | Registering Redlock._extend_script 2025-07-04 17:43:22.304992 | orchestrator | Registering Redlock._release_script 2025-07-04 17:43:22.365980 | orchestrator | 2025-07-04 17:43:22 | INFO  | Task b8e602b3-36e0-4429-ada2-b5a4a4c28ac1 (squid) was prepared for execution. 2025-07-04 17:43:22.366185 | orchestrator | 2025-07-04 17:43:22 | INFO  | It takes a moment until task b8e602b3-36e0-4429-ada2-b5a4a4c28ac1 (squid) has been started and output is visible here. 2025-07-04 17:43:26.394986 | orchestrator | 2025-07-04 17:43:26.399300 | orchestrator | PLAY [Apply role squid] ******************************************************** 2025-07-04 17:43:26.400717 | orchestrator | 2025-07-04 17:43:26.400747 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2025-07-04 17:43:26.400813 | orchestrator | Friday 04 July 2025 17:43:26 +0000 (0:00:00.168) 0:00:00.168 *********** 2025-07-04 17:43:26.493176 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2025-07-04 17:43:26.493468 | orchestrator | 2025-07-04 17:43:26.494100 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2025-07-04 17:43:26.494733 | orchestrator | Friday 04 July 2025 17:43:26 +0000 (0:00:00.101) 0:00:00.270 *********** 2025-07-04 17:43:27.984232 | orchestrator | ok: [testbed-manager] 2025-07-04 17:43:27.984883 | orchestrator | 2025-07-04 17:43:27.985611 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2025-07-04 17:43:27.986689 | orchestrator | Friday 04 July 2025 17:43:27 +0000 (0:00:01.489) 0:00:01.759 *********** 2025-07-04 17:43:29.172976 | 
orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2025-07-04 17:43:29.173469 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2025-07-04 17:43:29.174782 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2025-07-04 17:43:29.176295 | orchestrator | 2025-07-04 17:43:29.177134 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2025-07-04 17:43:29.177805 | orchestrator | Friday 04 July 2025 17:43:29 +0000 (0:00:01.188) 0:00:02.948 *********** 2025-07-04 17:43:30.270549 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2025-07-04 17:43:30.271712 | orchestrator | 2025-07-04 17:43:30.273512 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2025-07-04 17:43:30.274687 | orchestrator | Friday 04 July 2025 17:43:30 +0000 (0:00:01.097) 0:00:04.045 *********** 2025-07-04 17:43:30.649404 | orchestrator | ok: [testbed-manager] 2025-07-04 17:43:30.649510 | orchestrator | 2025-07-04 17:43:30.650824 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2025-07-04 17:43:30.651468 | orchestrator | Friday 04 July 2025 17:43:30 +0000 (0:00:00.379) 0:00:04.424 *********** 2025-07-04 17:43:31.608540 | orchestrator | changed: [testbed-manager] 2025-07-04 17:43:31.608761 | orchestrator | 2025-07-04 17:43:31.610251 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2025-07-04 17:43:31.610499 | orchestrator | Friday 04 July 2025 17:43:31 +0000 (0:00:00.959) 0:00:05.384 *********** 2025-07-04 17:44:03.550478 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2025-07-04 17:44:03.550784 | orchestrator | ok: [testbed-manager] 2025-07-04 17:44:03.550813 | orchestrator | 2025-07-04 17:44:03.551116 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2025-07-04 17:44:03.552786 | orchestrator | Friday 04 July 2025 17:44:03 +0000 (0:00:31.938) 0:00:37.323 *********** 2025-07-04 17:44:16.150575 | orchestrator | changed: [testbed-manager] 2025-07-04 17:44:16.150696 | orchestrator | 2025-07-04 17:44:16.150713 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2025-07-04 17:44:16.150854 | orchestrator | Friday 04 July 2025 17:44:16 +0000 (0:00:12.601) 0:00:49.924 *********** 2025-07-04 17:45:16.229467 | orchestrator | Pausing for 60 seconds 2025-07-04 17:45:16.229577 | orchestrator | changed: [testbed-manager] 2025-07-04 17:45:16.229593 | orchestrator | 2025-07-04 17:45:16.229665 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2025-07-04 17:45:16.229754 | orchestrator | Friday 04 July 2025 17:45:16 +0000 (0:01:00.075) 0:01:50.000 *********** 2025-07-04 17:45:16.299267 | orchestrator | ok: [testbed-manager] 2025-07-04 17:45:16.300218 | orchestrator | 2025-07-04 17:45:16.301546 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2025-07-04 17:45:16.302686 | orchestrator | Friday 04 July 2025 17:45:16 +0000 (0:00:00.074) 0:01:50.075 *********** 2025-07-04 17:45:16.996382 | orchestrator | changed: [testbed-manager] 2025-07-04 17:45:16.996491 | orchestrator | 2025-07-04 17:45:16.997034 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-04 17:45:16.997500 | orchestrator | 2025-07-04 17:45:16 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 
2025-07-04 17:45:16.997591 | orchestrator | 2025-07-04 17:45:16 | INFO  | Please wait and do not abort execution.
2025-07-04 17:45:16.999367 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-04 17:45:16.999868 | orchestrator |
2025-07-04 17:45:17.000753 | orchestrator |
2025-07-04 17:45:17.001329 | orchestrator | TASKS RECAP ********************************************************************
2025-07-04 17:45:17.001800 | orchestrator | Friday 04 July 2025 17:45:16 +0000 (0:00:00.693) 0:01:50.769 ***********
2025-07-04 17:45:17.002703 | orchestrator | ===============================================================================
2025-07-04 17:45:17.003146 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.08s
2025-07-04 17:45:17.003853 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 31.94s
2025-07-04 17:45:17.004492 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.60s
2025-07-04 17:45:17.005182 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.49s
2025-07-04 17:45:17.005579 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.19s
2025-07-04 17:45:17.005952 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.10s
2025-07-04 17:45:17.006398 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.96s
2025-07-04 17:45:17.006729 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.69s
2025-07-04 17:45:17.007130 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.38s
2025-07-04 17:45:17.008017 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.10s
2025-07-04 17:45:17.008111 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.07s
2025-07-04 17:45:17.514555 | orchestrator | + [[ 9.1.0 != \l\a\t\e\s\t ]]
2025-07-04 17:45:17.514664 | orchestrator | + sed -i 's#docker_namespace: kolla#docker_namespace: kolla/release#' /opt/configuration/inventory/group_vars/all/kolla.yml
2025-07-04 17:45:17.518390 | orchestrator | ++ semver 9.1.0 9.0.0
2025-07-04 17:45:17.598694 | orchestrator | + [[ 1 -lt 0 ]]
2025-07-04 17:45:17.599682 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes
2025-07-04 17:45:19.256304 | orchestrator | Registering Redlock._acquired_script
2025-07-04 17:45:19.256407 | orchestrator | Registering Redlock._extend_script
2025-07-04 17:45:19.256421 | orchestrator | Registering Redlock._release_script
2025-07-04 17:45:19.315493 | orchestrator | 2025-07-04 17:45:19 | INFO  | Task 81ab6369-23a4-437a-90e5-4afb3f9cbc6a (operator) was prepared for execution.
2025-07-04 17:45:19.315587 | orchestrator | 2025-07-04 17:45:19 | INFO  | It takes a moment until task 81ab6369-23a4-437a-90e5-4afb3f9cbc6a (operator) has been started and output is visible here.
2025-07-04 17:45:23.515006 | orchestrator |
2025-07-04 17:45:23.515628 | orchestrator | PLAY [Make ssh pipelining working] *********************************************
2025-07-04 17:45:23.515869 | orchestrator |
2025-07-04 17:45:23.516171 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-07-04 17:45:23.517897 | orchestrator | Friday 04 July 2025 17:45:23 +0000 (0:00:00.153) 0:00:00.153 ***********
2025-07-04 17:45:26.754450 | orchestrator | ok: [testbed-node-1]
2025-07-04 17:45:26.754577 | orchestrator | ok: [testbed-node-0]
2025-07-04 17:45:26.755410 | orchestrator | ok: [testbed-node-2]
2025-07-04 17:45:26.756134 | orchestrator | ok: [testbed-node-3]
2025-07-04 17:45:26.756965 | orchestrator | ok: [testbed-node-5]
2025-07-04 17:45:26.757390 | orchestrator | ok: [testbed-node-4]
2025-07-04 17:45:26.758347 | orchestrator |
2025-07-04 17:45:26.758799 | orchestrator | TASK [Do not require tty for all users] ****************************************
2025-07-04 17:45:26.759317 | orchestrator | Friday 04 July 2025 17:45:26 +0000 (0:00:03.242) 0:00:03.395 ***********
2025-07-04 17:45:27.480655 | orchestrator | ok: [testbed-node-0]
2025-07-04 17:45:27.480870 | orchestrator | ok: [testbed-node-5]
2025-07-04 17:45:27.482318 | orchestrator | ok: [testbed-node-3]
2025-07-04 17:45:27.483609 | orchestrator | ok: [testbed-node-4]
2025-07-04 17:45:27.484154 | orchestrator | ok: [testbed-node-2]
2025-07-04 17:45:27.486580 | orchestrator | ok: [testbed-node-1]
2025-07-04 17:45:27.486627 | orchestrator |
2025-07-04 17:45:27.486645 | orchestrator | PLAY [Apply role operator] *****************************************************
2025-07-04 17:45:27.487002 | orchestrator |
2025-07-04 17:45:27.487733 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] *****
2025-07-04 17:45:27.488674 | orchestrator | Friday 04 July 2025 17:45:27 +0000 (0:00:00.725) 0:00:04.121 ***********
2025-07-04 17:45:27.554240 | orchestrator | ok: [testbed-node-0]
2025-07-04 17:45:27.575842 | orchestrator | ok: [testbed-node-1]
2025-07-04 17:45:27.602492 | orchestrator | ok: [testbed-node-2]
2025-07-04 17:45:27.651448 | orchestrator | ok: [testbed-node-3]
2025-07-04 17:45:27.652132 | orchestrator | ok: [testbed-node-4]
2025-07-04 17:45:27.655518 | orchestrator | ok: [testbed-node-5]
2025-07-04 17:45:27.655530 | orchestrator |
2025-07-04 17:45:27.655537 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] ***
2025-07-04 17:45:27.656332 | orchestrator | Friday 04 July 2025 17:45:27 +0000 (0:00:00.170) 0:00:04.291 ***********
2025-07-04 17:45:27.742660 | orchestrator | ok: [testbed-node-0]
2025-07-04 17:45:27.764236 | orchestrator | ok: [testbed-node-1]
2025-07-04 17:45:27.836233 | orchestrator | ok: [testbed-node-2]
2025-07-04 17:45:27.837554 | orchestrator | ok: [testbed-node-3]
2025-07-04 17:45:27.840742 | orchestrator | ok: [testbed-node-4]
2025-07-04 17:45:27.840806 | orchestrator | ok: [testbed-node-5]
2025-07-04 17:45:27.840827 | orchestrator |
2025-07-04 17:45:27.840848 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2025-07-04 17:45:27.842129 | orchestrator | Friday 04 July 2025 17:45:27 +0000 (0:00:00.184) 0:00:04.476 ***********
2025-07-04 17:45:28.457629 | orchestrator | changed: [testbed-node-0]
2025-07-04 17:45:28.459770 | orchestrator | changed: [testbed-node-5]
2025-07-04 17:45:28.460399 | orchestrator | changed: [testbed-node-3]
2025-07-04 17:45:28.461040 | orchestrator | changed: [testbed-node-1]
2025-07-04 17:45:28.461848 | orchestrator | changed: [testbed-node-2]
2025-07-04 17:45:28.462178 | orchestrator | changed: [testbed-node-4]
2025-07-04 17:45:28.463093 | orchestrator |
2025-07-04 17:45:28.463383 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2025-07-04 17:45:28.464162 | orchestrator | Friday 04 July 2025 17:45:28 +0000 (0:00:00.621) 0:00:05.098 ***********
2025-07-04 17:45:29.251313 | orchestrator | changed: [testbed-node-5]
2025-07-04 17:45:29.253141 | orchestrator | changed: [testbed-node-0]
2025-07-04 17:45:29.253172 | orchestrator | changed: [testbed-node-1]
2025-07-04 17:45:29.255894 | orchestrator | changed: [testbed-node-2]
2025-07-04 17:45:29.256538 | orchestrator | changed: [testbed-node-3]
2025-07-04 17:45:29.257170 | orchestrator | changed: [testbed-node-4]
2025-07-04 17:45:29.257988 | orchestrator |
2025-07-04 17:45:29.258974 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2025-07-04 17:45:29.259616 | orchestrator | Friday 04 July 2025 17:45:29 +0000 (0:00:00.792) 0:00:05.890 ***********
2025-07-04 17:45:30.504208 | orchestrator | changed: [testbed-node-0] => (item=adm)
2025-07-04 17:45:30.507284 | orchestrator | changed: [testbed-node-1] => (item=adm)
2025-07-04 17:45:30.507364 | orchestrator | changed: [testbed-node-2] => (item=adm)
2025-07-04 17:45:30.507378 | orchestrator | changed: [testbed-node-3] => (item=adm)
2025-07-04 17:45:30.507390 | orchestrator | changed: [testbed-node-5] => (item=adm)
2025-07-04 17:45:30.507913 | orchestrator | changed: [testbed-node-4] => (item=adm)
2025-07-04 17:45:30.509362 | orchestrator | changed: [testbed-node-1] => (item=sudo)
2025-07-04 17:45:30.510209 | orchestrator | changed: [testbed-node-0] => (item=sudo)
2025-07-04 17:45:30.511460 | orchestrator | changed: [testbed-node-2] => (item=sudo)
2025-07-04 17:45:30.511621 | orchestrator | changed: [testbed-node-3] => (item=sudo)
2025-07-04 17:45:30.512732 | orchestrator | changed: [testbed-node-5] => (item=sudo)
2025-07-04 17:45:30.513439 | orchestrator | changed: [testbed-node-4] => (item=sudo)
2025-07-04 17:45:30.514174 | orchestrator |
2025-07-04 17:45:30.514910 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2025-07-04 17:45:30.515623 | orchestrator | Friday 04 July 2025 17:45:30 +0000 (0:00:01.251) 0:00:07.142 ***********
2025-07-04 17:45:31.686559 | orchestrator | changed: [testbed-node-5]
2025-07-04 17:45:31.689835 | orchestrator | changed: [testbed-node-3]
2025-07-04 17:45:31.689903 | orchestrator | changed: [testbed-node-1]
2025-07-04 17:45:31.689956 | orchestrator | changed: [testbed-node-4]
2025-07-04 17:45:31.689976 | orchestrator | changed: [testbed-node-2]
2025-07-04 17:45:31.689994 | orchestrator | changed: [testbed-node-0]
2025-07-04 17:45:31.690014 | orchestrator |
2025-07-04 17:45:31.690106 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2025-07-04 17:45:31.690125 | orchestrator | Friday 04 July 2025 17:45:31 +0000 (0:00:01.184) 0:00:08.326 ***********
2025-07-04 17:45:32.877814 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2025-07-04 17:45:32.879371 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2025-07-04 17:45:32.880078 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2025-07-04 17:45:32.945889 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2025-07-04 17:45:32.946463 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2025-07-04 17:45:32.948011 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2025-07-04 17:45:32.949439 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2025-07-04 17:45:32.951062 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2025-07-04 17:45:32.952779 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2025-07-04 17:45:32.954092 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2025-07-04 17:45:32.955274 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2025-07-04 17:45:32.956202 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2025-07-04 17:45:32.957077 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2025-07-04 17:45:32.957871 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2025-07-04 17:45:32.958753 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2025-07-04 17:45:32.959325 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2025-07-04 17:45:32.960239 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2025-07-04 17:45:32.960858 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2025-07-04 17:45:32.961801 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2025-07-04 17:45:32.962386 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2025-07-04 17:45:32.962829 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2025-07-04 17:45:32.963601 | orchestrator |
2025-07-04 17:45:32.964010 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2025-07-04 17:45:32.964565 | orchestrator | Friday 04 July 2025 17:45:32 +0000 (0:00:01.258) 0:00:09.584 ***********
2025-07-04 17:45:33.500205 | orchestrator | changed: [testbed-node-0]
2025-07-04 17:45:33.500421 | orchestrator | changed: [testbed-node-5]
2025-07-04 17:45:33.501566 | orchestrator | changed: [testbed-node-3]
2025-07-04 17:45:33.503533 | orchestrator | changed: [testbed-node-2]
2025-07-04 17:45:33.503890 | orchestrator | changed: [testbed-node-1]
2025-07-04 17:45:33.504973 | orchestrator | changed: [testbed-node-4]
2025-07-04 17:45:33.505373 | orchestrator |
2025-07-04 17:45:33.505883 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2025-07-04 17:45:33.506842 | orchestrator | Friday 04 July 2025 17:45:33 +0000 (0:00:00.556) 0:00:10.141 ***********
2025-07-04 17:45:33.585622 | orchestrator | skipping: [testbed-node-0]
2025-07-04 17:45:33.609693 | orchestrator | skipping: [testbed-node-1]
2025-07-04 17:45:33.635010 | orchestrator | skipping: [testbed-node-2]
2025-07-04 17:45:33.692877 | orchestrator | skipping: [testbed-node-3]
2025-07-04 17:45:33.693465 | orchestrator | skipping: [testbed-node-4]
2025-07-04 17:45:33.694775 | orchestrator | skipping: [testbed-node-5]
2025-07-04 17:45:33.696217 | orchestrator |
2025-07-04 17:45:33.697349 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2025-07-04 17:45:33.698549 | orchestrator | Friday 04 July 2025 17:45:33 +0000 (0:00:00.192) 0:00:10.333 ***********
2025-07-04 17:45:34.429706 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-07-04 17:45:34.431454 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-07-04 17:45:34.431493 | orchestrator | changed: [testbed-node-0]
2025-07-04 17:45:34.432060 | orchestrator | changed: [testbed-node-5]
2025-07-04 17:45:34.432476 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-07-04 17:45:34.433341 | orchestrator | changed: [testbed-node-4]
2025-07-04 17:45:34.433879 | orchestrator | changed: [testbed-node-1] => (item=None)
2025-07-04 17:45:34.434860 | orchestrator | changed: [testbed-node-1]
2025-07-04 17:45:34.435524 | orchestrator | changed: [testbed-node-2] => (item=None)
2025-07-04 17:45:34.436714 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-07-04 17:45:34.436892 | orchestrator | changed: [testbed-node-2]
2025-07-04 17:45:34.437108 | orchestrator | changed: [testbed-node-3]
2025-07-04 17:45:34.437705 | orchestrator |
2025-07-04 17:45:34.438118 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2025-07-04 17:45:34.438872 | orchestrator | Friday 04 July 2025 17:45:34 +0000 (0:00:00.733) 0:00:11.066 ***********
2025-07-04 17:45:34.496335 | orchestrator | skipping: [testbed-node-0]
2025-07-04 17:45:34.520355 | orchestrator | skipping: [testbed-node-1]
2025-07-04 17:45:34.569503 | orchestrator | skipping: [testbed-node-2]
2025-07-04 17:45:34.615847 | orchestrator | skipping: [testbed-node-3]
2025-07-04 17:45:34.616093 | orchestrator | skipping: [testbed-node-4]
2025-07-04 17:45:34.617280 | orchestrator | skipping: [testbed-node-5]
2025-07-04 17:45:34.619041 | orchestrator |
2025-07-04 17:45:34.619563 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2025-07-04 17:45:34.620325 | orchestrator | Friday 04 July 2025 17:45:34 +0000 (0:00:00.189) 0:00:11.256 ***********
2025-07-04 17:45:34.675881 | orchestrator | skipping: [testbed-node-0]
2025-07-04 17:45:34.699972 | orchestrator | skipping: [testbed-node-1]
2025-07-04 17:45:34.721888 | orchestrator | skipping: [testbed-node-2]
2025-07-04 17:45:34.792723 | orchestrator | skipping: [testbed-node-3]
2025-07-04 17:45:34.793386 | orchestrator | skipping: [testbed-node-4]
2025-07-04 17:45:34.793831 | orchestrator | skipping: [testbed-node-5]
2025-07-04 17:45:34.795082 | orchestrator |
2025-07-04 17:45:34.795692 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2025-07-04 17:45:34.796409 | orchestrator | Friday 04 July 2025 17:45:34 +0000 (0:00:00.175) 0:00:11.432 ***********
2025-07-04 17:45:34.878759 | orchestrator | skipping: [testbed-node-0]
2025-07-04 17:45:34.902806 | orchestrator | skipping: [testbed-node-1]
2025-07-04 17:45:34.926359 | orchestrator | skipping: [testbed-node-2]
2025-07-04 17:45:34.965162 | orchestrator | skipping: [testbed-node-3]
2025-07-04 17:45:34.967583 | orchestrator | skipping: [testbed-node-4]
2025-07-04 17:45:34.968859 | orchestrator | skipping: [testbed-node-5]
2025-07-04 17:45:34.969634 | orchestrator |
2025-07-04 17:45:34.970467 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2025-07-04 17:45:34.971550 | orchestrator | Friday 04 July 2025 17:45:34 +0000 (0:00:00.171) 0:00:11.603 ***********
2025-07-04 17:45:35.598084 | orchestrator | changed: [testbed-node-0]
2025-07-04 17:45:35.598253 | orchestrator | changed: [testbed-node-1]
2025-07-04 17:45:35.599995 | orchestrator | changed: [testbed-node-3]
2025-07-04 17:45:35.601286 | orchestrator | changed: [testbed-node-2]
2025-07-04 17:45:35.602512 | orchestrator | changed: [testbed-node-4]
2025-07-04 17:45:35.603239 | orchestrator | changed: [testbed-node-5]
2025-07-04 17:45:35.604232 | orchestrator |
2025-07-04 17:45:35.604977 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2025-07-04 17:45:35.605595 | orchestrator | Friday 04 July 2025 17:45:35 +0000 (0:00:00.633) 0:00:12.237 ***********
2025-07-04 17:45:35.677232 | orchestrator | skipping: [testbed-node-0]
2025-07-04 17:45:35.725877 | orchestrator | skipping: [testbed-node-1]
2025-07-04 17:45:35.820063 | orchestrator | skipping: [testbed-node-2]
2025-07-04 17:45:35.820265 | orchestrator | skipping: [testbed-node-3]
2025-07-04 17:45:35.820868 | orchestrator | skipping: [testbed-node-4]
2025-07-04 17:45:35.821520 | orchestrator | skipping: [testbed-node-5]
2025-07-04 17:45:35.822864 | orchestrator |
2025-07-04 17:45:35.823902 | orchestrator | PLAY RECAP *********************************************************************
2025-07-04 17:45:35.823977 | orchestrator | 2025-07-04 17:45:35 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-07-04 17:45:35.826645 | orchestrator | 2025-07-04 17:45:35 | INFO  | Please wait and do not abort execution.
2025-07-04 17:45:35.827042 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-07-04 17:45:35.827439 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-07-04 17:45:35.828016 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-07-04 17:45:35.828560 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-07-04 17:45:35.828905 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-07-04 17:45:35.829137 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-07-04 17:45:35.829615 | orchestrator |
2025-07-04 17:45:35.830155 | orchestrator |
2025-07-04 17:45:35.830770 | orchestrator | TASKS RECAP ********************************************************************
2025-07-04 17:45:35.831100 | orchestrator | Friday 04 July 2025 17:45:35 +0000 (0:00:00.223) 0:00:12.461 ***********
2025-07-04 17:45:35.831388 | orchestrator | ===============================================================================
2025-07-04 17:45:35.832774 | orchestrator | Gathering Facts --------------------------------------------------------- 3.24s
2025-07-04 17:45:35.833167 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.26s
2025-07-04 17:45:35.833747 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.25s
2025-07-04 17:45:35.834099 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.18s
2025-07-04 17:45:35.834623 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.79s
2025-07-04 17:45:35.835224 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.73s
2025-07-04 17:45:35.835504 | orchestrator | Do not require tty for all users ---------------------------------------- 0.73s
2025-07-04 17:45:35.836085 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.63s
2025-07-04 17:45:35.836463 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.62s
2025-07-04 17:45:35.837075 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.56s
2025-07-04 17:45:35.837451 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.22s
2025-07-04 17:45:35.838051 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.19s
2025-07-04 17:45:35.838501 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.19s
2025-07-04 17:45:35.838943 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.18s
2025-07-04 17:45:35.839323 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.18s
2025-07-04 17:45:35.839857 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.17s
2025-07-04 17:45:35.840362 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.17s
2025-07-04 17:45:36.318704 | orchestrator | + osism apply --environment custom facts
2025-07-04 17:45:37.990313 | orchestrator | 2025-07-04 17:45:37 | INFO  | Trying to run play facts in environment custom
2025-07-04 17:45:37.994434 | orchestrator | Registering Redlock._acquired_script
2025-07-04 17:45:37.994477 | orchestrator | Registering Redlock._extend_script
2025-07-04 17:45:37.994489 | orchestrator | Registering Redlock._release_script
2025-07-04 17:45:38.055605 | orchestrator | 2025-07-04 17:45:38 | INFO  | Task 01b1db14-d446-47a5-bb4d-463fc8b2e21c (facts) was prepared for execution.
2025-07-04 17:45:38.055703 | orchestrator | 2025-07-04 17:45:38 | INFO  | It takes a moment until task 01b1db14-d446-47a5-bb4d-463fc8b2e21c (facts) has been started and output is visible here.
2025-07-04 17:45:42.063163 | orchestrator |
2025-07-04 17:45:42.064438 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2025-07-04 17:45:42.065125 | orchestrator |
2025-07-04 17:45:42.067105 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-07-04 17:45:42.067735 | orchestrator | Friday 04 July 2025 17:45:42 +0000 (0:00:00.084) 0:00:00.084 ***********
2025-07-04 17:45:43.450397 | orchestrator | ok: [testbed-manager]
2025-07-04 17:45:43.451657 | orchestrator | changed: [testbed-node-3]
2025-07-04 17:45:43.453364 | orchestrator | changed: [testbed-node-0]
2025-07-04 17:45:43.455297 | orchestrator | changed: [testbed-node-4]
2025-07-04 17:45:43.455390 | orchestrator | changed: [testbed-node-1]
2025-07-04 17:45:43.456978 | orchestrator | changed: [testbed-node-2]
2025-07-04 17:45:43.457112 | orchestrator | changed: [testbed-node-5]
2025-07-04 17:45:43.457513 | orchestrator |
2025-07-04 17:45:43.458193 | orchestrator | TASK [Copy fact file] **********************************************************
2025-07-04 17:45:43.458644 | orchestrator | Friday 04 July 2025 17:45:43 +0000 (0:00:01.385) 0:00:01.470 ***********
2025-07-04 17:45:44.681745 | orchestrator | ok: [testbed-manager]
2025-07-04 17:45:44.683628 | orchestrator | changed: [testbed-node-1]
2025-07-04 17:45:44.684802 | orchestrator | changed: [testbed-node-4]
2025-07-04 17:45:44.686651 | orchestrator | changed: [testbed-node-2]
2025-07-04 17:45:44.687609 | orchestrator | changed: [testbed-node-0]
2025-07-04 17:45:44.689480 | orchestrator | changed: [testbed-node-5]
2025-07-04 17:45:44.689852 | orchestrator | changed: [testbed-node-3]
2025-07-04 17:45:44.690716 | orchestrator |
2025-07-04 17:45:44.691175 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2025-07-04 17:45:44.691947 | orchestrator |
2025-07-04 17:45:44.692598 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2025-07-04 17:45:44.693011 | orchestrator | Friday 04 July 2025 17:45:44 +0000 (0:00:01.233) 0:00:02.704 ***********
2025-07-04 17:45:44.841695 | orchestrator | ok: [testbed-node-3]
2025-07-04 17:45:44.842938 | orchestrator | ok: [testbed-node-4]
2025-07-04 17:45:44.846704 | orchestrator | ok: [testbed-node-5]
2025-07-04 17:45:44.848264 | orchestrator |
2025-07-04 17:45:44.848579 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2025-07-04 17:45:44.849748 | orchestrator | Friday 04 July 2025 17:45:44 +0000 (0:00:00.224) 0:00:02.865 ***********
2025-07-04 17:45:45.064720 | orchestrator | ok: [testbed-node-3]
2025-07-04 17:45:45.065144 | orchestrator | ok: [testbed-node-5]
2025-07-04 17:45:45.066093 | orchestrator | ok: [testbed-node-4]
2025-07-04 17:45:45.068890 | orchestrator |
2025-07-04 17:45:45.069316 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2025-07-04 17:45:45.072489 | orchestrator | Friday 04 July 2025 17:45:45 +0000 (0:00:00.235) 0:00:03.089 ***********
2025-07-04 17:45:45.301047 | orchestrator | ok: [testbed-node-3]
2025-07-04 17:45:45.303352 | orchestrator | ok: [testbed-node-4]
2025-07-04 17:45:45.303556 | orchestrator | ok: [testbed-node-5]
2025-07-04 17:45:45.303580 | orchestrator |
2025-07-04 17:45:45.304360 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2025-07-04 17:45:45.304555 | orchestrator | Friday 04 July 2025 17:45:45 +0000 (0:00:00.235) 0:00:03.324 ***********
2025-07-04 17:45:45.477166 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-07-04 17:45:45.477409 | orchestrator |
2025-07-04 17:45:45.478406 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2025-07-04 17:45:45.479715 | orchestrator | Friday 04 July 2025 17:45:45 +0000 (0:00:00.176) 0:00:03.501 ***********
2025-07-04 17:45:45.966305 | orchestrator | ok: [testbed-node-3]
2025-07-04 17:45:45.967102 | orchestrator | ok: [testbed-node-4]
2025-07-04 17:45:45.968372 | orchestrator | ok: [testbed-node-5]
2025-07-04 17:45:45.969964 | orchestrator |
2025-07-04 17:45:45.971132 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2025-07-04 17:45:45.971900 | orchestrator | Friday 04 July 2025 17:45:45 +0000 (0:00:00.489) 0:00:03.990 ***********
2025-07-04 17:45:46.088277 | orchestrator | skipping: [testbed-node-3]
2025-07-04 17:45:46.088445 | orchestrator | skipping: [testbed-node-4]
2025-07-04 17:45:46.089070 | orchestrator | skipping: [testbed-node-5]
2025-07-04 17:45:46.089799 | orchestrator |
2025-07-04 17:45:46.090694 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2025-07-04 17:45:46.091857 | orchestrator | Friday 04 July 2025 17:45:46 +0000 (0:00:00.121) 0:00:04.112 ***********
2025-07-04 17:45:47.140383 | orchestrator | changed: [testbed-node-3]
2025-07-04 17:45:47.140512 | orchestrator | changed: [testbed-node-4]
2025-07-04 17:45:47.140689 | orchestrator | changed: [testbed-node-5]
2025-07-04 17:45:47.141386 | orchestrator |
2025-07-04 17:45:47.142452 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2025-07-04 17:45:47.142982 | orchestrator | Friday 04 July 2025 17:45:47 +0000 (0:00:01.047) 0:00:05.159 ***********
2025-07-04 17:45:47.597005 | orchestrator | ok: [testbed-node-3]
2025-07-04 17:45:47.598551 | orchestrator | ok: [testbed-node-4]
2025-07-04 17:45:47.599061 | orchestrator | ok: [testbed-node-5]
2025-07-04 17:45:47.600495 | orchestrator |
2025-07-04 17:45:47.601104 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2025-07-04 17:45:47.601146 | orchestrator | Friday 04 July 2025 17:45:47 +0000 (0:00:00.459) 0:00:05.619 ***********
2025-07-04 17:45:48.625059 | orchestrator | changed: [testbed-node-3]
2025-07-04 17:45:48.625474 | orchestrator | changed: [testbed-node-4]
2025-07-04 17:45:48.627007 | orchestrator | changed: [testbed-node-5]
2025-07-04 17:45:48.628236 | orchestrator |
2025-07-04 17:45:48.629974 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2025-07-04 17:45:48.630563 | orchestrator | Friday 04 July 2025 17:45:48 +0000 (0:00:01.027) 0:00:06.647 ***********
2025-07-04 17:46:02.547959 | orchestrator | changed: [testbed-node-3]
2025-07-04 17:46:02.548075 | orchestrator | changed: [testbed-node-5]
2025-07-04 17:46:02.548090 | orchestrator | changed: [testbed-node-4]
2025-07-04 17:46:02.549292 | orchestrator |
2025-07-04 17:46:02.550256 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2025-07-04 17:46:02.551432 | orchestrator | Friday 04 July 2025 17:46:02 +0000 (0:00:00.120) 0:00:20.567 ***********
2025-07-04 17:46:02.617109 | orchestrator | skipping: [testbed-node-3]
2025-07-04 17:46:02.663988 | orchestrator | skipping: [testbed-node-4]
2025-07-04 17:46:02.664490 | orchestrator | skipping: [testbed-node-5]
2025-07-04 17:46:02.665871 | orchestrator |
2025-07-04 17:46:02.667438 | orchestrator | TASK [Install required packages (Debian)] **************************************
2025-07-04 17:46:02.668075 | orchestrator | Friday 04 July 2025 17:46:02 +0000 (0:00:00.120) 0:00:20.688 ***********
2025-07-04 17:46:09.592069 | orchestrator | changed: [testbed-node-3]
2025-07-04 17:46:09.592178 | orchestrator | changed: [testbed-node-5]
2025-07-04 17:46:09.593873 | orchestrator | changed: [testbed-node-4]
2025-07-04 17:46:09.596104 | orchestrator |
2025-07-04 17:46:09.598429 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-07-04 17:46:09.599587 | orchestrator | Friday 04 July 2025 17:46:09 +0000 (0:00:06.925) 0:00:27.613 ***********
2025-07-04 17:46:10.021906 | orchestrator | ok: [testbed-node-3]
2025-07-04 17:46:10.022468 | orchestrator | ok: [testbed-node-4]
2025-07-04 17:46:10.023166 | orchestrator | ok: [testbed-node-5]
2025-07-04 17:46:10.023312 | orchestrator |
2025-07-04 17:46:10.024054 | orchestrator | TASK [Copy fact files] *********************************************************
2025-07-04 17:46:10.024790 | orchestrator | Friday 04 July 2025 17:46:10 +0000 (0:00:00.432) 0:00:28.045 ***********
2025-07-04 17:46:13.469880 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2025-07-04 17:46:13.471337 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2025-07-04 17:46:13.471775 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2025-07-04 17:46:13.473170 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2025-07-04 17:46:13.475207 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2025-07-04 17:46:13.476244 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2025-07-04 17:46:13.477542 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2025-07-04 17:46:13.478582 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2025-07-04 17:46:13.479417 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2025-07-04 17:46:13.481024 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2025-07-04 17:46:13.482011 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2025-07-04 17:46:13.483012 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2025-07-04 17:46:13.483901 | orchestrator |
2025-07-04 17:46:13.485064 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-07-04 17:46:13.485797 | orchestrator | Friday 04 July 2025 17:46:13 +0000 (0:00:03.446) 0:00:31.492 ***********
2025-07-04 17:46:14.594413 | orchestrator | ok: [testbed-node-3]
2025-07-04 17:46:14.595326 | orchestrator | ok: [testbed-node-4]
2025-07-04 17:46:14.595571 | orchestrator | ok: [testbed-node-5]
2025-07-04 17:46:14.597045 | orchestrator |
2025-07-04 17:46:14.600993 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-07-04 17:46:14.601411 | orchestrator |
2025-07-04 17:46:14.602434 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-07-04 17:46:14.602575 | orchestrator | Friday 04 July 2025 17:46:14 +0000 (0:00:01.124) 0:00:32.616 ***********
2025-07-04 17:46:18.355863 | orchestrator | ok: [testbed-node-0]
2025-07-04 17:46:18.356089 | orchestrator | ok: [testbed-node-1]
2025-07-04 17:46:18.357676 | orchestrator | ok: [testbed-node-2]
2025-07-04 17:46:18.358230 | orchestrator | ok: [testbed-manager]
2025-07-04 17:46:18.358908 | orchestrator | ok: [testbed-node-5]
2025-07-04 17:46:18.359874 | orchestrator | ok: [testbed-node-3]
2025-07-04 17:46:18.360683 | orchestrator | ok: [testbed-node-4]
2025-07-04 17:46:18.362178 | orchestrator |
2025-07-04 17:46:18.362350 | orchestrator | PLAY RECAP *********************************************************************
2025-07-04 17:46:18.362714 | orchestrator | 2025-07-04 17:46:18 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-07-04 17:46:18.362784 | orchestrator | 2025-07-04 17:46:18 | INFO  | Please wait and do not abort execution.
2025-07-04 17:46:18.363974 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-04 17:46:18.364845 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-04 17:46:18.365817 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-04 17:46:18.367002 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-04 17:46:18.367165 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-04 17:46:18.367806 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-04 17:46:18.368528 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-04 17:46:18.369789 | orchestrator |
2025-07-04 17:46:18.370095 | orchestrator |
2025-07-04 17:46:18.370744 | orchestrator | TASKS RECAP ********************************************************************
2025-07-04 17:46:18.371456 | orchestrator | Friday 04 July 2025 17:46:18 +0000 (0:00:03.763) 0:00:36.380 ***********
2025-07-04 17:46:18.372039 | orchestrator | ===============================================================================
2025-07-04 17:46:18.372495 | orchestrator | osism.commons.repository : Update package cache ------------------------ 13.92s
2025-07-04 17:46:18.373020 | orchestrator | Install required packages (Debian) -------------------------------------- 6.93s
2025-07-04 17:46:18.373718 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.76s
2025-07-04 17:46:18.374273 | orchestrator | Copy fact files --------------------------------------------------------- 3.45s
2025-07-04 17:46:18.374540 | orchestrator | Create custom facts directory ------------------------------------------- 1.39s
2025-07-04 17:46:18.374982 | orchestrator | Copy fact file ---------------------------------------------------------- 1.23s
2025-07-04 17:46:18.375540 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.12s
2025-07-04 17:46:18.375744 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.05s
2025-07-04 17:46:18.376332 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.03s
2025-07-04 17:46:18.376638 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.49s
2025-07-04 17:46:18.377039 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.46s
2025-07-04 17:46:18.377464 | orchestrator | Create custom facts directory ------------------------------------------- 0.43s
2025-07-04 17:46:18.378515 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.24s
2025-07-04 17:46:18.378770 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.22s
2025-07-04 17:46:18.379320 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.18s
2025-07-04 17:46:18.379593 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.16s
2025-07-04 17:46:18.380179 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.12s
2025-07-04 17:46:18.380450 |
orchestrator | Install required packages (RedHat) -------------------------------------- 0.12s
2025-07-04 17:46:18.845343 | orchestrator | + osism apply bootstrap
2025-07-04 17:46:20.592887 | orchestrator | Registering Redlock._acquired_script
2025-07-04 17:46:20.593052 | orchestrator | Registering Redlock._extend_script
2025-07-04 17:46:20.593069 | orchestrator | Registering Redlock._release_script
2025-07-04 17:46:20.663821 | orchestrator | 2025-07-04 17:46:20 | INFO  | Task 8863d45a-d17d-47ba-a854-668e88378d6a (bootstrap) was prepared for execution.
2025-07-04 17:46:20.663905 | orchestrator | 2025-07-04 17:46:20 | INFO  | It takes a moment until task 8863d45a-d17d-47ba-a854-668e88378d6a (bootstrap) has been started and output is visible here.
2025-07-04 17:46:24.883547 | orchestrator |
2025-07-04 17:46:24.883764 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2025-07-04 17:46:24.887746 | orchestrator |
2025-07-04 17:46:24.887835 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2025-07-04 17:46:24.888229 | orchestrator | Friday 04 July 2025 17:46:24 +0000 (0:00:00.169) 0:00:00.169 ***********
2025-07-04 17:46:24.965017 | orchestrator | ok: [testbed-manager]
2025-07-04 17:46:24.995091 | orchestrator | ok: [testbed-node-3]
2025-07-04 17:46:25.021378 | orchestrator | ok: [testbed-node-4]
2025-07-04 17:46:25.050709 | orchestrator | ok: [testbed-node-5]
2025-07-04 17:46:25.131976 | orchestrator | ok: [testbed-node-0]
2025-07-04 17:46:25.133252 | orchestrator | ok: [testbed-node-1]
2025-07-04 17:46:25.137509 | orchestrator | ok: [testbed-node-2]
2025-07-04 17:46:25.137557 | orchestrator |
2025-07-04 17:46:25.138603 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-07-04 17:46:25.139418 | orchestrator |
2025-07-04 17:46:25.140818 | orchestrator | TASK [Gathers facts about hosts]
***********************************************
2025-07-04 17:46:25.142223 | orchestrator | Friday 04 July 2025 17:46:25 +0000 (0:00:00.251) 0:00:00.420 ***********
2025-07-04 17:46:28.700623 | orchestrator | ok: [testbed-node-1]
2025-07-04 17:46:28.701014 | orchestrator | ok: [testbed-node-0]
2025-07-04 17:46:28.702256 | orchestrator | ok: [testbed-node-2]
2025-07-04 17:46:28.705813 | orchestrator | ok: [testbed-manager]
2025-07-04 17:46:28.707149 | orchestrator | ok: [testbed-node-3]
2025-07-04 17:46:28.707868 | orchestrator | ok: [testbed-node-5]
2025-07-04 17:46:28.708491 | orchestrator | ok: [testbed-node-4]
2025-07-04 17:46:28.709103 | orchestrator |
2025-07-04 17:46:28.709425 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2025-07-04 17:46:28.710103 | orchestrator |
2025-07-04 17:46:28.710557 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-07-04 17:46:28.710899 | orchestrator | Friday 04 July 2025 17:46:28 +0000 (0:00:03.568) 0:00:03.989 ***********
2025-07-04 17:46:28.795819 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2025-07-04 17:46:28.795918 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2025-07-04 17:46:28.833459 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2025-07-04 17:46:28.833592 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-07-04 17:46:28.833695 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2025-07-04 17:46:28.834149 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-07-04 17:46:28.836741 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2025-07-04 17:46:28.839316 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-07-04 17:46:28.884496 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2025-07-04 17:46:28.885831 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2025-07-04 17:46:28.887314 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2025-07-04 17:46:28.888156 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-07-04 17:46:28.888426 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2025-07-04 17:46:28.888812 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2025-07-04 17:46:28.889343 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-07-04 17:46:28.889774 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2025-07-04 17:46:29.152163 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2025-07-04 17:46:29.154666 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-07-04 17:46:29.155376 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2025-07-04 17:46:29.156399 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-07-04 17:46:29.157446 | orchestrator | skipping: [testbed-node-3]
2025-07-04 17:46:29.158968 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-07-04 17:46:29.160223 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2025-07-04 17:46:29.161495 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-07-04 17:46:29.162540 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-07-04 17:46:29.163432 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2025-07-04 17:46:29.166295 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-07-04 17:46:29.166326 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2025-07-04 17:46:29.166338 | orchestrator | skipping: [testbed-node-4]
2025-07-04 17:46:29.166350 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2025-07-04 17:46:29.167066 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2025-07-04 17:46:29.167703 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-07-04 17:46:29.168604 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2025-07-04 17:46:29.169190 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2025-07-04 17:46:29.169836 | orchestrator | skipping: [testbed-manager]
2025-07-04 17:46:29.170469 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2025-07-04 17:46:29.171289 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2025-07-04 17:46:29.172084 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-07-04 17:46:29.172798 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-07-04 17:46:29.173398 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2025-07-04 17:46:29.173897 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-07-04 17:46:29.174309 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2025-07-04 17:46:29.174805 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-07-04 17:46:29.175434 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-07-04 17:46:29.176885 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-07-04 17:46:29.177112 | orchestrator | skipping: [testbed-node-0]
2025-07-04 17:46:29.177730 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2025-07-04 17:46:29.178647 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-07-04 17:46:29.178889 | orchestrator | skipping: [testbed-node-5]
2025-07-04 17:46:29.179475 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-07-04 17:46:29.180320 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-07-04 17:46:29.180580 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-07-04 17:46:29.181186 | orchestrator | skipping: [testbed-node-1]
2025-07-04 17:46:29.181832 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-07-04 17:46:29.182359 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-07-04 17:46:29.182878 | orchestrator | skipping: [testbed-node-2]
2025-07-04 17:46:29.183750 | orchestrator |
2025-07-04 17:46:29.183780 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2025-07-04 17:46:29.184287 | orchestrator |
2025-07-04 17:46:29.184704 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2025-07-04 17:46:29.184956 | orchestrator | Friday 04 July 2025 17:46:29 +0000 (0:00:00.450) 0:00:04.439 ***********
2025-07-04 17:46:30.465692 | orchestrator | ok: [testbed-node-5]
2025-07-04 17:46:30.465971 | orchestrator | ok: [testbed-node-0]
2025-07-04 17:46:30.466743 | orchestrator | ok: [testbed-node-3]
2025-07-04 17:46:30.468114 | orchestrator | ok: [testbed-node-4]
2025-07-04 17:46:30.468873 | orchestrator | ok: [testbed-manager]
2025-07-04 17:46:30.469577 | orchestrator | ok: [testbed-node-1]
2025-07-04 17:46:30.470089 | orchestrator | ok: [testbed-node-2]
2025-07-04 17:46:30.470568 | orchestrator |
2025-07-04 17:46:30.471321 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2025-07-04 17:46:30.471856 | orchestrator | Friday 04 July 2025 17:46:30 +0000 (0:00:01.314) 0:00:05.754 ***********
2025-07-04 17:46:31.725984 | orchestrator | ok: [testbed-manager]
2025-07-04 17:46:31.726214 | orchestrator | ok: [testbed-node-3]
2025-07-04 17:46:31.727044 | orchestrator | ok: [testbed-node-0]
2025-07-04 17:46:31.727972 | orchestrator | ok: [testbed-node-5]
2025-07-04 17:46:31.728982 | orchestrator | ok: [testbed-node-1]
2025-07-04 17:46:31.729403 | orchestrator | ok: [testbed-node-4]
2025-07-04 17:46:31.730502 | orchestrator | ok:
[testbed-node-2]
2025-07-04 17:46:31.730712 | orchestrator |
2025-07-04 17:46:31.731406 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2025-07-04 17:46:31.732628 | orchestrator | Friday 04 July 2025 17:46:31 +0000 (0:00:01.258) 0:00:07.012 ***********
2025-07-04 17:46:32.065373 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-07-04 17:46:32.065635 | orchestrator |
2025-07-04 17:46:32.067296 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
2025-07-04 17:46:32.069328 | orchestrator | Friday 04 July 2025 17:46:32 +0000 (0:00:00.337) 0:00:07.349 ***********
2025-07-04 17:46:34.193207 | orchestrator | changed: [testbed-manager]
2025-07-04 17:46:34.196368 | orchestrator | changed: [testbed-node-3]
2025-07-04 17:46:34.196417 | orchestrator | changed: [testbed-node-5]
2025-07-04 17:46:34.200735 | orchestrator | changed: [testbed-node-1]
2025-07-04 17:46:34.201749 | orchestrator | changed: [testbed-node-0]
2025-07-04 17:46:34.202738 | orchestrator | changed: [testbed-node-4]
2025-07-04 17:46:34.203901 | orchestrator | changed: [testbed-node-2]
2025-07-04 17:46:34.204747 | orchestrator |
2025-07-04 17:46:34.205690 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] ***************
2025-07-04 17:46:34.206439 | orchestrator | Friday 04 July 2025 17:46:34 +0000 (0:00:02.129) 0:00:09.479 ***********
2025-07-04 17:46:34.270463 | orchestrator | skipping: [testbed-manager]
2025-07-04 17:46:34.487897 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-07-04 17:46:34.488976 | orchestrator |
2025-07-04 17:46:34.493665 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] ****************
2025-07-04 17:46:34.493753 | orchestrator | Friday 04 July 2025 17:46:34 +0000 (0:00:00.297) 0:00:09.776 ***********
2025-07-04 17:46:35.615480 | orchestrator | changed: [testbed-node-3]
2025-07-04 17:46:35.617730 | orchestrator | changed: [testbed-node-5]
2025-07-04 17:46:35.617762 | orchestrator | changed: [testbed-node-4]
2025-07-04 17:46:35.618448 | orchestrator | changed: [testbed-node-0]
2025-07-04 17:46:35.620020 | orchestrator | changed: [testbed-node-1]
2025-07-04 17:46:35.621010 | orchestrator | changed: [testbed-node-2]
2025-07-04 17:46:35.622236 | orchestrator |
2025-07-04 17:46:35.624504 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ******
2025-07-04 17:46:35.625085 | orchestrator | Friday 04 July 2025 17:46:35 +0000 (0:00:01.126) 0:00:10.902 ***********
2025-07-04 17:46:35.688223 | orchestrator | skipping: [testbed-manager]
2025-07-04 17:46:36.150356 | orchestrator | changed: [testbed-node-5]
2025-07-04 17:46:36.151283 | orchestrator | changed: [testbed-node-2]
2025-07-04 17:46:36.151414 | orchestrator | changed: [testbed-node-3]
2025-07-04 17:46:36.153217 | orchestrator | changed: [testbed-node-1]
2025-07-04 17:46:36.154693 | orchestrator | changed: [testbed-node-0]
2025-07-04 17:46:36.155049 | orchestrator | changed: [testbed-node-4]
2025-07-04 17:46:36.156400 | orchestrator |
2025-07-04 17:46:36.157368 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] ***
2025-07-04 17:46:36.158531 | orchestrator | Friday 04 July 2025 17:46:36 +0000 (0:00:00.535) 0:00:11.438 ***********
2025-07-04 17:46:36.254273 | orchestrator | skipping: [testbed-node-3]
2025-07-04 17:46:36.279129 | orchestrator | skipping: [testbed-node-4]
2025-07-04 17:46:36.303452 | orchestrator | skipping: [testbed-node-5]
2025-07-04 17:46:36.569106 | orchestrator | skipping: [testbed-node-0]
2025-07-04 17:46:36.569263 | orchestrator | skipping: [testbed-node-1]
2025-07-04 17:46:36.570465 | orchestrator | skipping: [testbed-node-2]
2025-07-04 17:46:36.571573 | orchestrator | ok: [testbed-manager]
2025-07-04 17:46:36.573241 | orchestrator |
2025-07-04 17:46:36.573268 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2025-07-04 17:46:36.574147 | orchestrator | Friday 04 July 2025 17:46:36 +0000 (0:00:00.417) 0:00:11.856 ***********
2025-07-04 17:46:36.650187 | orchestrator | skipping: [testbed-manager]
2025-07-04 17:46:36.677734 | orchestrator | skipping: [testbed-node-3]
2025-07-04 17:46:36.704305 | orchestrator | skipping: [testbed-node-4]
2025-07-04 17:46:36.729609 | orchestrator | skipping: [testbed-node-5]
2025-07-04 17:46:36.782533 | orchestrator | skipping: [testbed-node-0]
2025-07-04 17:46:36.783052 | orchestrator | skipping: [testbed-node-1]
2025-07-04 17:46:36.783544 | orchestrator | skipping: [testbed-node-2]
2025-07-04 17:46:36.784081 | orchestrator |
2025-07-04 17:46:36.784544 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2025-07-04 17:46:36.785194 | orchestrator | Friday 04 July 2025 17:46:36 +0000 (0:00:00.214) 0:00:12.071 ***********
2025-07-04 17:46:37.075848 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-07-04 17:46:37.077021 | orchestrator |
2025-07-04 17:46:37.078446 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2025-07-04 17:46:37.079297 | orchestrator | Friday 04 July 2025 17:46:37 +0000 (0:00:00.292) 0:00:12.364 ***********
2025-07-04 17:46:37.418359 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-07-04 17:46:37.419829 | orchestrator |
2025-07-04 17:46:37.421463 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2025-07-04 17:46:37.422853 | orchestrator | Friday 04 July 2025 17:46:37 +0000 (0:00:00.339) 0:00:12.703 ***********
2025-07-04 17:46:38.734134 | orchestrator | ok: [testbed-manager]
2025-07-04 17:46:38.734856 | orchestrator | ok: [testbed-node-3]
2025-07-04 17:46:38.735643 | orchestrator | ok: [testbed-node-5]
2025-07-04 17:46:38.736641 | orchestrator | ok: [testbed-node-1]
2025-07-04 17:46:38.737529 | orchestrator | ok: [testbed-node-0]
2025-07-04 17:46:38.738232 | orchestrator | ok: [testbed-node-2]
2025-07-04 17:46:38.739017 | orchestrator | ok: [testbed-node-4]
2025-07-04 17:46:38.739626 | orchestrator |
2025-07-04 17:46:38.740194 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2025-07-04 17:46:38.740959 | orchestrator | Friday 04 July 2025 17:46:38 +0000 (0:00:01.317) 0:00:14.021 ***********
2025-07-04 17:46:38.806716 | orchestrator | skipping: [testbed-manager]
2025-07-04 17:46:38.830773 | orchestrator | skipping: [testbed-node-3]
2025-07-04 17:46:38.858451 | orchestrator | skipping: [testbed-node-4]
2025-07-04 17:46:38.882115 | orchestrator | skipping: [testbed-node-5]
2025-07-04 17:46:38.936151 | orchestrator | skipping: [testbed-node-0]
2025-07-04 17:46:38.937454 | orchestrator | skipping: [testbed-node-1]
2025-07-04 17:46:38.937484 | orchestrator | skipping: [testbed-node-2]
2025-07-04 17:46:38.937496 | orchestrator |
2025-07-04 17:46:38.938095 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2025-07-04 17:46:38.938457 | orchestrator | Friday 04 July 2025 17:46:38 +0000 (0:00:00.204) 0:00:14.225 ***********
2025-07-04 17:46:39.479313 | orchestrator | ok: [testbed-manager]
2025-07-04 17:46:39.483659 | orchestrator | ok: [testbed-node-3]
2025-07-04 17:46:39.484461 | orchestrator | ok: [testbed-node-5]
2025-07-04 17:46:39.486126 | orchestrator | ok: [testbed-node-4]
2025-07-04 17:46:39.486601 | orchestrator | ok: [testbed-node-0]
2025-07-04 17:46:39.487692 | orchestrator | ok: [testbed-node-1]
2025-07-04 17:46:39.489993 | orchestrator | ok: [testbed-node-2]
2025-07-04 17:46:39.490875 | orchestrator |
2025-07-04 17:46:39.491756 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2025-07-04 17:46:39.492499 | orchestrator | Friday 04 July 2025 17:46:39 +0000 (0:00:00.540) 0:00:14.765 ***********
2025-07-04 17:46:39.564455 | orchestrator | skipping: [testbed-manager]
2025-07-04 17:46:39.593012 | orchestrator | skipping: [testbed-node-3]
2025-07-04 17:46:39.616712 | orchestrator | skipping: [testbed-node-4]
2025-07-04 17:46:39.644725 | orchestrator | skipping: [testbed-node-5]
2025-07-04 17:46:39.748542 | orchestrator | skipping: [testbed-node-0]
2025-07-04 17:46:39.750467 | orchestrator | skipping: [testbed-node-1]
2025-07-04 17:46:39.751475 | orchestrator | skipping: [testbed-node-2]
2025-07-04 17:46:39.752815 | orchestrator |
2025-07-04 17:46:39.753971 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2025-07-04 17:46:39.755183 | orchestrator | Friday 04 July 2025 17:46:39 +0000 (0:00:00.270) 0:00:15.036 ***********
2025-07-04 17:46:40.343665 | orchestrator | ok: [testbed-manager]
2025-07-04 17:46:40.343763 | orchestrator | changed: [testbed-node-3]
2025-07-04 17:46:40.344919 | orchestrator | changed: [testbed-node-5]
2025-07-04 17:46:40.345010 | orchestrator | changed: [testbed-node-4]
2025-07-04 17:46:40.345018 | orchestrator | changed: [testbed-node-0]
2025-07-04 17:46:40.345025 | orchestrator | changed: [testbed-node-1]
2025-07-04 17:46:40.345032 | orchestrator | changed: [testbed-node-2]
2025-07-04 17:46:40.345352 | orchestrator |
2025-07-04 17:46:40.345651 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2025-07-04 17:46:40.346075 | orchestrator | Friday 04 July 2025 17:46:40 +0000 (0:00:00.592) 0:00:15.628 ***********
2025-07-04 17:46:41.451285 | orchestrator | ok: [testbed-manager]
2025-07-04 17:46:41.454264 | orchestrator | changed: [testbed-node-3]
2025-07-04 17:46:41.454376 | orchestrator | changed: [testbed-node-5]
2025-07-04 17:46:41.454452 | orchestrator | changed: [testbed-node-4]
2025-07-04 17:46:41.456454 | orchestrator | changed: [testbed-node-0]
2025-07-04 17:46:41.457345 | orchestrator | changed: [testbed-node-1]
2025-07-04 17:46:41.457723 | orchestrator | changed: [testbed-node-2]
2025-07-04 17:46:41.458763 | orchestrator |
2025-07-04 17:46:41.459535 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2025-07-04 17:46:41.460333 | orchestrator | Friday 04 July 2025 17:46:41 +0000 (0:00:01.109) 0:00:16.738 ***********
2025-07-04 17:46:42.598522 | orchestrator | ok: [testbed-manager]
2025-07-04 17:46:42.599719 | orchestrator | ok: [testbed-node-0]
2025-07-04 17:46:42.600243 | orchestrator | ok: [testbed-node-5]
2025-07-04 17:46:42.601311 | orchestrator | ok: [testbed-node-1]
2025-07-04 17:46:42.603118 | orchestrator | ok: [testbed-node-4]
2025-07-04 17:46:42.604336 | orchestrator | ok: [testbed-node-2]
2025-07-04 17:46:42.604891 | orchestrator | ok: [testbed-node-3]
2025-07-04 17:46:42.606343 | orchestrator |
2025-07-04 17:46:42.607481 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2025-07-04 17:46:42.608176 | orchestrator | Friday 04 July 2025 17:46:42 +0000 (0:00:01.147) 0:00:17.885 ***********
2025-07-04 17:46:43.005565 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-07-04 17:46:43.006865 | orchestrator |
2025-07-04 17:46:43.007144 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2025-07-04 17:46:43.008792 | orchestrator | Friday 04 July 2025 17:46:42 +0000 (0:00:00.408) 0:00:18.294 ***********
2025-07-04 17:46:43.079456 | orchestrator | skipping: [testbed-manager]
2025-07-04 17:46:44.298819 | orchestrator | changed: [testbed-node-1]
2025-07-04 17:46:44.299126 | orchestrator | changed: [testbed-node-3]
2025-07-04 17:46:44.300652 | orchestrator | changed: [testbed-node-5]
2025-07-04 17:46:44.301988 | orchestrator | changed: [testbed-node-2]
2025-07-04 17:46:44.303185 | orchestrator | changed: [testbed-node-0]
2025-07-04 17:46:44.304805 | orchestrator | changed: [testbed-node-4]
2025-07-04 17:46:44.305443 | orchestrator |
2025-07-04 17:46:44.306426 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2025-07-04 17:46:44.307411 | orchestrator | Friday 04 July 2025 17:46:44 +0000 (0:00:01.290) 0:00:19.584 ***********
2025-07-04 17:46:44.382996 | orchestrator | ok: [testbed-manager]
2025-07-04 17:46:44.405978 | orchestrator | ok: [testbed-node-3]
2025-07-04 17:46:44.436588 | orchestrator | ok: [testbed-node-4]
2025-07-04 17:46:44.460822 | orchestrator | ok: [testbed-node-5]
2025-07-04 17:46:44.526272 | orchestrator | ok: [testbed-node-0]
2025-07-04 17:46:44.527558 | orchestrator | ok: [testbed-node-1]
2025-07-04 17:46:44.528633 | orchestrator | ok: [testbed-node-2]
2025-07-04 17:46:44.530132 | orchestrator |
2025-07-04 17:46:44.530781 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2025-07-04 17:46:44.532173 | orchestrator | Friday 04 July 2025 17:46:44 +0000 (0:00:00.229) 0:00:19.814 ***********
2025-07-04 17:46:44.607005 | orchestrator | ok: [testbed-manager]
2025-07-04 17:46:44.633077 | orchestrator | ok: [testbed-node-3]
2025-07-04 17:46:44.662097 | orchestrator | ok: [testbed-node-4]
2025-07-04 17:46:44.687716 | orchestrator | ok: [testbed-node-5]
2025-07-04 17:46:44.763454 | orchestrator | ok: [testbed-node-0]
2025-07-04 17:46:44.764595 | orchestrator | ok: [testbed-node-1]
2025-07-04 17:46:44.765554 | orchestrator | ok: [testbed-node-2]
2025-07-04 17:46:44.766483 | orchestrator |
2025-07-04 17:46:44.767601 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2025-07-04 17:46:44.768478 | orchestrator | Friday 04 July 2025 17:46:44 +0000 (0:00:00.237) 0:00:20.052 ***********
2025-07-04 17:46:44.842343 | orchestrator | ok: [testbed-manager]
2025-07-04 17:46:44.871795 | orchestrator | ok: [testbed-node-3]
2025-07-04 17:46:44.898561 | orchestrator | ok: [testbed-node-4]
2025-07-04 17:46:44.930127 | orchestrator | ok: [testbed-node-5]
2025-07-04 17:46:45.000875 | orchestrator | ok: [testbed-node-0]
2025-07-04 17:46:45.002010 | orchestrator | ok: [testbed-node-1]
2025-07-04 17:46:45.003323 | orchestrator | ok: [testbed-node-2]
2025-07-04 17:46:45.004325 | orchestrator |
2025-07-04 17:46:45.005064 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2025-07-04 17:46:45.005430 | orchestrator | Friday 04 July 2025 17:46:44 +0000 (0:00:00.237) 0:00:20.290 ***********
2025-07-04 17:46:45.324671 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-07-04 17:46:45.324834 | orchestrator |
2025-07-04 17:46:45.326381 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2025-07-04 17:46:45.328059 | orchestrator | Friday 04 July 2025 17:46:45 +0000 (0:00:00.322) 0:00:20.612 ***********
2025-07-04 17:46:45.910765 | orchestrator | ok: [testbed-manager]
2025-07-04 17:46:45.910995 | orchestrator | ok: [testbed-node-3]
2025-07-04 17:46:45.911249 | orchestrator | ok: [testbed-node-4]
2025-07-04 17:46:45.912611 | orchestrator | ok: [testbed-node-5]
2025-07-04 17:46:45.913675 | orchestrator | ok: [testbed-node-0]
2025-07-04 17:46:45.914158 | orchestrator | ok: [testbed-node-1]
2025-07-04 17:46:45.915263 | orchestrator | ok: [testbed-node-2]
2025-07-04 17:46:45.915852 | orchestrator |
2025-07-04 17:46:45.916221 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2025-07-04 17:46:45.917327 | orchestrator | Friday 04 July 2025 17:46:45 +0000 (0:00:00.585) 0:00:21.197 ***********
2025-07-04 17:46:45.989121 | orchestrator | skipping: [testbed-manager]
2025-07-04 17:46:46.017052 | orchestrator | skipping: [testbed-node-3]
2025-07-04 17:46:46.056671 | orchestrator | skipping: [testbed-node-4]
2025-07-04 17:46:46.083464 | orchestrator | skipping: [testbed-node-5]
2025-07-04 17:46:46.155804 | orchestrator | skipping: [testbed-node-0]
2025-07-04 17:46:46.157151 | orchestrator | skipping: [testbed-node-1]
2025-07-04 17:46:46.158473 | orchestrator | skipping: [testbed-node-2]
2025-07-04 17:46:46.159862 | orchestrator |
2025-07-04 17:46:46.160628 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2025-07-04 17:46:46.162431 | orchestrator | Friday 04 July 2025 17:46:46 +0000 (0:00:00.247) 0:00:21.444 ***********
2025-07-04 17:46:47.231562 | orchestrator | ok: [testbed-manager]
2025-07-04 17:46:47.231664 | orchestrator | ok: [testbed-node-3]
2025-07-04 17:46:47.232212 | orchestrator | ok: [testbed-node-4]
2025-07-04 17:46:47.232692 | orchestrator | ok: [testbed-node-5]
2025-07-04 17:46:47.233599 | orchestrator | changed: [testbed-node-1]
2025-07-04 17:46:47.233819 | orchestrator | changed: [testbed-node-0]
2025-07-04 17:46:47.234797 | orchestrator | changed: [testbed-node-2]
2025-07-04 17:46:47.234882 | orchestrator |
2025-07-04 17:46:47.235747 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2025-07-04 17:46:47.235817 | orchestrator | Friday 04 July 2025 17:46:47 +0000 (0:00:01.074) 0:00:22.519 ***********
2025-07-04 17:46:47.793543 | orchestrator | ok: [testbed-manager]
2025-07-04 17:46:47.793723 | orchestrator | ok: [testbed-node-3]
2025-07-04 17:46:47.795108 | orchestrator | ok: [testbed-node-4]
2025-07-04 17:46:47.796133 | orchestrator | ok: [testbed-node-5]
2025-07-04 17:46:47.796748 | orchestrator | ok: [testbed-node-1]
2025-07-04 17:46:47.797971 | orchestrator | ok: [testbed-node-2]
2025-07-04 17:46:47.798445 | orchestrator | ok: [testbed-node-0]
2025-07-04 17:46:47.799247 | orchestrator |
2025-07-04 17:46:47.800000 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2025-07-04 17:46:47.801238 | orchestrator | Friday 04 July 2025 17:46:47 +0000 (0:00:00.561) 0:00:23.080 ***********
2025-07-04 17:46:48.905489 | orchestrator | ok: [testbed-manager]
2025-07-04 17:46:48.905700 | orchestrator | ok: [testbed-node-3]
2025-07-04 17:46:48.907409 | orchestrator | ok: [testbed-node-5]
2025-07-04 17:46:48.908565 | orchestrator | ok: [testbed-node-4]
2025-07-04 17:46:48.909544 | orchestrator | changed: [testbed-node-1]
2025-07-04 17:46:48.910413 | orchestrator | changed: [testbed-node-2]
2025-07-04 17:46:48.910840 | orchestrator | changed: [testbed-node-0]
2025-07-04 17:46:48.911782 | orchestrator |
2025-07-04 17:46:48.912314 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2025-07-04 17:46:48.912789 | orchestrator | Friday 04 July 2025 17:46:48 +0000 (0:00:01.111) 0:00:24.192 ***********
2025-07-04 17:47:02.890469 | orchestrator | ok: [testbed-node-5]
2025-07-04 17:47:02.890587 | orchestrator | ok: [testbed-node-3]
2025-07-04 17:47:02.891642 | orchestrator | ok: [testbed-node-4]
2025-07-04 17:47:02.891983 | orchestrator | changed: [testbed-manager]
2025-07-04 17:47:02.894274 | orchestrator | changed: [testbed-node-1]
2025-07-04 17:47:02.895044 | orchestrator | changed: [testbed-node-0]
2025-07-04 17:47:02.896317 | orchestrator | changed: [testbed-node-2]
2025-07-04 17:47:02.897853 | orchestrator |
2025-07-04 17:47:02.900618 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] *****
2025-07-04 17:47:02.902354 | orchestrator | Friday 04 July 2025 17:47:02 +0000 (0:00:13.983) 0:00:38.176 ***********
2025-07-04 17:47:02.975046 | orchestrator | ok: [testbed-manager]
2025-07-04 17:47:03.003662 | orchestrator | ok: [testbed-node-3]
2025-07-04 17:47:03.034834 | orchestrator | ok: [testbed-node-4]
2025-07-04 17:47:03.059602 | orchestrator | ok: [testbed-node-5]
2025-07-04 17:47:03.115686 | orchestrator | ok: [testbed-node-0]
2025-07-04 17:47:03.116617 | orchestrator | ok: [testbed-node-1]
2025-07-04 17:47:03.120525 | orchestrator | ok: [testbed-node-2]
2025-07-04 17:47:03.120575 | orchestrator |
2025-07-04 17:47:03.120590 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
2025-07-04 17:47:03.120603 | orchestrator | Friday 04 July 2025 17:47:03 +0000 (0:00:00.228) 0:00:38.404 ***********
2025-07-04 17:47:03.198295 | orchestrator | ok: [testbed-manager]
2025-07-04 17:47:03.228818 | orchestrator | ok: [testbed-node-3]
2025-07-04 17:47:03.254502 | orchestrator | ok: [testbed-node-4]
2025-07-04 17:47:03.295070 | orchestrator | ok: [testbed-node-5]
2025-07-04 17:47:03.395207 | orchestrator | ok: [testbed-node-0]
2025-07-04 17:47:03.396149 | orchestrator | ok: [testbed-node-1]
2025-07-04 17:47:03.398365 | orchestrator | ok: [testbed-node-2]
2025-07-04 17:47:03.399254 | orchestrator |
2025-07-04 17:47:03.400428 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
2025-07-04 17:47:03.401419 | orchestrator | Friday 04 July 2025 17:47:03 +0000 (0:00:00.278) 0:00:38.682 ***********
2025-07-04 17:47:03.499913 | orchestrator | ok: [testbed-manager]
2025-07-04 17:47:03.525465 | orchestrator | ok: [testbed-node-3]
2025-07-04 17:47:03.558123 | orchestrator | ok: [testbed-node-4]
2025-07-04 17:47:03.579973 | orchestrator | ok: [testbed-node-5]
2025-07-04 17:47:03.648126 | orchestrator | ok: [testbed-node-0]
2025-07-04 17:47:03.649980 | orchestrator | ok: [testbed-node-1]
2025-07-04 17:47:03.651352 | orchestrator | ok: [testbed-node-2]
2025-07-04 17:47:03.653312 | orchestrator |
2025-07-04 17:47:03.654140 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2025-07-04 17:47:03.654996 | orchestrator | Friday 04 July 2025 17:47:03 +0000 (0:00:00.252) 0:00:38.935 ***********
2025-07-04 17:47:03.964555 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-07-04 17:47:03.965203 | orchestrator |
2025-07-04 17:47:03.966292 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************
2025-07-04 17:47:03.967525 | orchestrator | Friday 04 July 2025 17:47:03 +0000 (0:00:00.316) 0:00:39.251 ***********
2025-07-04 17:47:05.698671 | orchestrator | ok: [testbed-node-3]
2025-07-04 17:47:05.698791 | orchestrator | ok: [testbed-manager]
2025-07-04 17:47:05.698871 | orchestrator | ok: [testbed-node-5]
2025-07-04 17:47:05.698886 | orchestrator | ok: [testbed-node-1]
2025-07-04 17:47:05.699292 | orchestrator | ok: [testbed-node-0]
2025-07-04 17:47:05.700870 | orchestrator | ok: [testbed-node-4]
2025-07-04 17:47:05.700916 | orchestrator | ok: [testbed-node-2]
2025-07-04 17:47:05.701389 | orchestrator |
2025-07-04 17:47:05.701888 | orchestrator | TASK
[osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2025-07-04 17:47:05.702127 | orchestrator | Friday 04 July 2025 17:47:05 +0000 (0:00:01.735) 0:00:40.986 *********** 2025-07-04 17:47:06.833046 | orchestrator | changed: [testbed-manager] 2025-07-04 17:47:06.835322 | orchestrator | changed: [testbed-node-3] 2025-07-04 17:47:06.835353 | orchestrator | changed: [testbed-node-4] 2025-07-04 17:47:06.835456 | orchestrator | changed: [testbed-node-5] 2025-07-04 17:47:06.836064 | orchestrator | changed: [testbed-node-1] 2025-07-04 17:47:06.836909 | orchestrator | changed: [testbed-node-0] 2025-07-04 17:47:06.837554 | orchestrator | changed: [testbed-node-2] 2025-07-04 17:47:06.838198 | orchestrator | 2025-07-04 17:47:06.838663 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2025-07-04 17:47:06.839486 | orchestrator | Friday 04 July 2025 17:47:06 +0000 (0:00:01.133) 0:00:42.119 *********** 2025-07-04 17:47:07.676161 | orchestrator | ok: [testbed-manager] 2025-07-04 17:47:07.677267 | orchestrator | ok: [testbed-node-3] 2025-07-04 17:47:07.678219 | orchestrator | ok: [testbed-node-5] 2025-07-04 17:47:07.679220 | orchestrator | ok: [testbed-node-4] 2025-07-04 17:47:07.679724 | orchestrator | ok: [testbed-node-1] 2025-07-04 17:47:07.680707 | orchestrator | ok: [testbed-node-0] 2025-07-04 17:47:07.680943 | orchestrator | ok: [testbed-node-2] 2025-07-04 17:47:07.681662 | orchestrator | 2025-07-04 17:47:07.682288 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2025-07-04 17:47:07.683089 | orchestrator | Friday 04 July 2025 17:47:07 +0000 (0:00:00.843) 0:00:42.963 *********** 2025-07-04 17:47:08.016239 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-07-04 
17:47:08.016640 | orchestrator | 2025-07-04 17:47:08.017335 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2025-07-04 17:47:08.020846 | orchestrator | Friday 04 July 2025 17:47:08 +0000 (0:00:00.339) 0:00:43.303 *********** 2025-07-04 17:47:09.178364 | orchestrator | changed: [testbed-manager] 2025-07-04 17:47:09.181692 | orchestrator | changed: [testbed-node-3] 2025-07-04 17:47:09.181767 | orchestrator | changed: [testbed-node-4] 2025-07-04 17:47:09.183607 | orchestrator | changed: [testbed-node-5] 2025-07-04 17:47:09.184583 | orchestrator | changed: [testbed-node-0] 2025-07-04 17:47:09.185907 | orchestrator | changed: [testbed-node-1] 2025-07-04 17:47:09.186519 | orchestrator | changed: [testbed-node-2] 2025-07-04 17:47:09.188173 | orchestrator | 2025-07-04 17:47:09.189090 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************ 2025-07-04 17:47:09.190101 | orchestrator | Friday 04 July 2025 17:47:09 +0000 (0:00:01.160) 0:00:44.463 *********** 2025-07-04 17:47:09.294326 | orchestrator | skipping: [testbed-manager] 2025-07-04 17:47:09.327871 | orchestrator | skipping: [testbed-node-3] 2025-07-04 17:47:09.359668 | orchestrator | skipping: [testbed-node-4] 2025-07-04 17:47:09.504595 | orchestrator | skipping: [testbed-node-5] 2025-07-04 17:47:09.506503 | orchestrator | skipping: [testbed-node-0] 2025-07-04 17:47:09.507026 | orchestrator | skipping: [testbed-node-1] 2025-07-04 17:47:09.508496 | orchestrator | skipping: [testbed-node-2] 2025-07-04 17:47:09.509508 | orchestrator | 2025-07-04 17:47:09.510555 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2025-07-04 17:47:09.511146 | orchestrator | Friday 04 July 2025 17:47:09 +0000 (0:00:00.329) 0:00:44.792 *********** 2025-07-04 17:47:21.438383 | orchestrator | changed: [testbed-node-3] 2025-07-04 17:47:21.438506 | orchestrator | changed: [testbed-node-5] 2025-07-04 
17:47:21.438522 | orchestrator | changed: [testbed-node-1] 2025-07-04 17:47:21.438605 | orchestrator | changed: [testbed-node-0] 2025-07-04 17:47:21.438620 | orchestrator | changed: [testbed-node-4] 2025-07-04 17:47:21.438662 | orchestrator | changed: [testbed-node-2] 2025-07-04 17:47:21.439672 | orchestrator | changed: [testbed-manager] 2025-07-04 17:47:21.440151 | orchestrator | 2025-07-04 17:47:21.441419 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2025-07-04 17:47:21.442136 | orchestrator | Friday 04 July 2025 17:47:21 +0000 (0:00:11.928) 0:00:56.720 *********** 2025-07-04 17:47:22.667345 | orchestrator | ok: [testbed-node-1] 2025-07-04 17:47:22.667478 | orchestrator | ok: [testbed-node-0] 2025-07-04 17:47:22.668521 | orchestrator | ok: [testbed-manager] 2025-07-04 17:47:22.668806 | orchestrator | ok: [testbed-node-5] 2025-07-04 17:47:22.669761 | orchestrator | ok: [testbed-node-3] 2025-07-04 17:47:22.670358 | orchestrator | ok: [testbed-node-4] 2025-07-04 17:47:22.673529 | orchestrator | ok: [testbed-node-2] 2025-07-04 17:47:22.673585 | orchestrator | 2025-07-04 17:47:22.673622 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2025-07-04 17:47:22.675015 | orchestrator | Friday 04 July 2025 17:47:22 +0000 (0:00:01.233) 0:00:57.954 *********** 2025-07-04 17:47:23.571593 | orchestrator | ok: [testbed-manager] 2025-07-04 17:47:23.572347 | orchestrator | ok: [testbed-node-3] 2025-07-04 17:47:23.574187 | orchestrator | ok: [testbed-node-5] 2025-07-04 17:47:23.575230 | orchestrator | ok: [testbed-node-0] 2025-07-04 17:47:23.576152 | orchestrator | ok: [testbed-node-4] 2025-07-04 17:47:23.577493 | orchestrator | ok: [testbed-node-1] 2025-07-04 17:47:23.577844 | orchestrator | ok: [testbed-node-2] 2025-07-04 17:47:23.579020 | orchestrator | 2025-07-04 17:47:23.579355 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 
2025-07-04 17:47:23.580364 | orchestrator | Friday 04 July 2025 17:47:23 +0000 (0:00:00.903) 0:00:58.858 *********** 2025-07-04 17:47:23.666815 | orchestrator | ok: [testbed-manager] 2025-07-04 17:47:23.700159 | orchestrator | ok: [testbed-node-3] 2025-07-04 17:47:23.725023 | orchestrator | ok: [testbed-node-4] 2025-07-04 17:47:23.758691 | orchestrator | ok: [testbed-node-5] 2025-07-04 17:47:23.833608 | orchestrator | ok: [testbed-node-0] 2025-07-04 17:47:23.833713 | orchestrator | ok: [testbed-node-1] 2025-07-04 17:47:23.834141 | orchestrator | ok: [testbed-node-2] 2025-07-04 17:47:23.835165 | orchestrator | 2025-07-04 17:47:23.836233 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2025-07-04 17:47:23.837488 | orchestrator | Friday 04 July 2025 17:47:23 +0000 (0:00:00.263) 0:00:59.122 *********** 2025-07-04 17:47:23.914545 | orchestrator | ok: [testbed-manager] 2025-07-04 17:47:23.939227 | orchestrator | ok: [testbed-node-3] 2025-07-04 17:47:23.976575 | orchestrator | ok: [testbed-node-4] 2025-07-04 17:47:24.008328 | orchestrator | ok: [testbed-node-5] 2025-07-04 17:47:24.071133 | orchestrator | ok: [testbed-node-0] 2025-07-04 17:47:24.071302 | orchestrator | ok: [testbed-node-1] 2025-07-04 17:47:24.072206 | orchestrator | ok: [testbed-node-2] 2025-07-04 17:47:24.073169 | orchestrator | 2025-07-04 17:47:24.074784 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2025-07-04 17:47:24.075742 | orchestrator | Friday 04 July 2025 17:47:24 +0000 (0:00:00.237) 0:00:59.359 *********** 2025-07-04 17:47:24.368648 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-07-04 17:47:24.369464 | orchestrator | 2025-07-04 17:47:24.372768 | orchestrator | TASK 
[osism.commons.packages : Install needrestart package] ******************** 2025-07-04 17:47:24.372827 | orchestrator | Friday 04 July 2025 17:47:24 +0000 (0:00:00.296) 0:00:59.656 *********** 2025-07-04 17:47:25.930806 | orchestrator | ok: [testbed-manager] 2025-07-04 17:47:25.931022 | orchestrator | ok: [testbed-node-3] 2025-07-04 17:47:25.931295 | orchestrator | ok: [testbed-node-0] 2025-07-04 17:47:25.931608 | orchestrator | ok: [testbed-node-5] 2025-07-04 17:47:25.932484 | orchestrator | ok: [testbed-node-1] 2025-07-04 17:47:25.932977 | orchestrator | ok: [testbed-node-4] 2025-07-04 17:47:25.933175 | orchestrator | ok: [testbed-node-2] 2025-07-04 17:47:25.934735 | orchestrator | 2025-07-04 17:47:25.935089 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 2025-07-04 17:47:25.935345 | orchestrator | Friday 04 July 2025 17:47:25 +0000 (0:00:01.561) 0:01:01.217 *********** 2025-07-04 17:47:26.534532 | orchestrator | changed: [testbed-manager] 2025-07-04 17:47:26.534630 | orchestrator | changed: [testbed-node-0] 2025-07-04 17:47:26.534980 | orchestrator | changed: [testbed-node-3] 2025-07-04 17:47:26.536128 | orchestrator | changed: [testbed-node-2] 2025-07-04 17:47:26.537266 | orchestrator | changed: [testbed-node-5] 2025-07-04 17:47:26.537878 | orchestrator | changed: [testbed-node-4] 2025-07-04 17:47:26.538906 | orchestrator | changed: [testbed-node-1] 2025-07-04 17:47:26.539431 | orchestrator | 2025-07-04 17:47:26.540285 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2025-07-04 17:47:26.540799 | orchestrator | Friday 04 July 2025 17:47:26 +0000 (0:00:00.604) 0:01:01.822 *********** 2025-07-04 17:47:26.632430 | orchestrator | ok: [testbed-manager] 2025-07-04 17:47:26.664767 | orchestrator | ok: [testbed-node-3] 2025-07-04 17:47:26.693841 | orchestrator | ok: [testbed-node-4] 2025-07-04 17:47:26.722255 | orchestrator | ok: [testbed-node-5] 2025-07-04 
17:47:26.798786 | orchestrator | ok: [testbed-node-0] 2025-07-04 17:47:26.799378 | orchestrator | ok: [testbed-node-1] 2025-07-04 17:47:26.800667 | orchestrator | ok: [testbed-node-2] 2025-07-04 17:47:26.801900 | orchestrator | 2025-07-04 17:47:26.803276 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2025-07-04 17:47:26.804357 | orchestrator | Friday 04 July 2025 17:47:26 +0000 (0:00:00.264) 0:01:02.086 *********** 2025-07-04 17:47:27.957301 | orchestrator | ok: [testbed-node-3] 2025-07-04 17:47:27.958259 | orchestrator | ok: [testbed-manager] 2025-07-04 17:47:27.960023 | orchestrator | ok: [testbed-node-5] 2025-07-04 17:47:27.960794 | orchestrator | ok: [testbed-node-0] 2025-07-04 17:47:27.961888 | orchestrator | ok: [testbed-node-1] 2025-07-04 17:47:27.962826 | orchestrator | ok: [testbed-node-4] 2025-07-04 17:47:27.964459 | orchestrator | ok: [testbed-node-2] 2025-07-04 17:47:27.965969 | orchestrator | 2025-07-04 17:47:27.967591 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2025-07-04 17:47:27.968835 | orchestrator | Friday 04 July 2025 17:47:27 +0000 (0:00:01.156) 0:01:03.243 *********** 2025-07-04 17:47:29.864176 | orchestrator | changed: [testbed-node-3] 2025-07-04 17:47:29.865861 | orchestrator | ok: [testbed-manager] 2025-07-04 17:47:29.866502 | orchestrator | changed: [testbed-node-5] 2025-07-04 17:47:29.867537 | orchestrator | changed: [testbed-node-0] 2025-07-04 17:47:29.869174 | orchestrator | changed: [testbed-node-1] 2025-07-04 17:47:29.870192 | orchestrator | changed: [testbed-node-4] 2025-07-04 17:47:29.871488 | orchestrator | ok: [testbed-node-2] 2025-07-04 17:47:29.872062 | orchestrator | 2025-07-04 17:47:29.873038 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2025-07-04 17:47:29.873996 | orchestrator | Friday 04 July 2025 17:47:29 +0000 (0:00:01.907) 0:01:05.150 *********** 2025-07-04 
17:47:36.690255 | orchestrator | ok: [testbed-node-3] 2025-07-04 17:47:36.690398 | orchestrator | ok: [testbed-node-5] 2025-07-04 17:47:36.691258 | orchestrator | ok: [testbed-node-1] 2025-07-04 17:47:36.691821 | orchestrator | ok: [testbed-node-0] 2025-07-04 17:47:36.694623 | orchestrator | ok: [testbed-node-4] 2025-07-04 17:47:36.695041 | orchestrator | changed: [testbed-node-2] 2025-07-04 17:47:36.696375 | orchestrator | changed: [testbed-manager] 2025-07-04 17:47:36.697523 | orchestrator | 2025-07-04 17:47:36.698266 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2025-07-04 17:47:36.699083 | orchestrator | Friday 04 July 2025 17:47:36 +0000 (0:00:06.825) 0:01:11.976 *********** 2025-07-04 17:48:16.300385 | orchestrator | ok: [testbed-manager] 2025-07-04 17:48:16.301192 | orchestrator | ok: [testbed-node-5] 2025-07-04 17:48:16.301223 | orchestrator | ok: [testbed-node-4] 2025-07-04 17:48:16.301242 | orchestrator | ok: [testbed-node-2] 2025-07-04 17:48:16.302571 | orchestrator | ok: [testbed-node-3] 2025-07-04 17:48:16.302630 | orchestrator | ok: [testbed-node-1] 2025-07-04 17:48:16.302637 | orchestrator | ok: [testbed-node-0] 2025-07-04 17:48:16.302735 | orchestrator | 2025-07-04 17:48:16.303095 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2025-07-04 17:48:16.303642 | orchestrator | Friday 04 July 2025 17:48:16 +0000 (0:00:39.609) 0:01:51.585 *********** 2025-07-04 17:49:32.787971 | orchestrator | changed: [testbed-manager] 2025-07-04 17:49:32.788093 | orchestrator | changed: [testbed-node-3] 2025-07-04 17:49:32.789776 | orchestrator | changed: [testbed-node-0] 2025-07-04 17:49:32.790703 | orchestrator | changed: [testbed-node-1] 2025-07-04 17:49:32.792066 | orchestrator | changed: [testbed-node-5] 2025-07-04 17:49:32.792869 | orchestrator | changed: [testbed-node-2] 2025-07-04 17:49:32.794341 | orchestrator | changed: [testbed-node-4] 2025-07-04 
17:49:32.795063 | orchestrator | 2025-07-04 17:49:32.796025 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2025-07-04 17:49:32.796234 | orchestrator | Friday 04 July 2025 17:49:32 +0000 (0:01:16.485) 0:03:08.070 *********** 2025-07-04 17:49:34.508248 | orchestrator | ok: [testbed-manager] 2025-07-04 17:49:34.508417 | orchestrator | ok: [testbed-node-3] 2025-07-04 17:49:34.510274 | orchestrator | ok: [testbed-node-1] 2025-07-04 17:49:34.511249 | orchestrator | ok: [testbed-node-0] 2025-07-04 17:49:34.511803 | orchestrator | ok: [testbed-node-5] 2025-07-04 17:49:34.512637 | orchestrator | ok: [testbed-node-4] 2025-07-04 17:49:34.513607 | orchestrator | ok: [testbed-node-2] 2025-07-04 17:49:34.513986 | orchestrator | 2025-07-04 17:49:34.514753 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2025-07-04 17:49:34.515405 | orchestrator | Friday 04 July 2025 17:49:34 +0000 (0:00:01.723) 0:03:09.794 *********** 2025-07-04 17:49:48.873308 | orchestrator | ok: [testbed-node-3] 2025-07-04 17:49:48.873488 | orchestrator | ok: [testbed-node-5] 2025-07-04 17:49:48.873503 | orchestrator | ok: [testbed-node-0] 2025-07-04 17:49:48.873509 | orchestrator | ok: [testbed-node-1] 2025-07-04 17:49:48.873515 | orchestrator | ok: [testbed-node-2] 2025-07-04 17:49:48.873521 | orchestrator | ok: [testbed-node-4] 2025-07-04 17:49:48.874537 | orchestrator | changed: [testbed-manager] 2025-07-04 17:49:48.875237 | orchestrator | 2025-07-04 17:49:48.876329 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2025-07-04 17:49:48.877661 | orchestrator | Friday 04 July 2025 17:49:48 +0000 (0:00:14.359) 0:03:24.153 *********** 2025-07-04 17:49:49.314317 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, 
testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2025-07-04 17:49:49.314445 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2025-07-04 17:49:49.315047 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2025-07-04 17:49:49.315535 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2025-07-04 17:49:49.316238 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 
'fs.inotify.max_user_instances', 'value': 1024}]}) 2025-07-04 17:49:49.316972 | orchestrator | 2025-07-04 17:49:49.317762 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2025-07-04 17:49:49.318404 | orchestrator | Friday 04 July 2025 17:49:49 +0000 (0:00:00.448) 0:03:24.601 *********** 2025-07-04 17:49:49.352344 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-07-04 17:49:49.386936 | orchestrator | skipping: [testbed-manager] 2025-07-04 17:49:49.434287 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-07-04 17:49:49.434441 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-07-04 17:49:49.464458 | orchestrator | skipping: [testbed-node-3] 2025-07-04 17:49:49.507282 | orchestrator | skipping: [testbed-node-4] 2025-07-04 17:49:49.508640 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-07-04 17:49:49.538798 | orchestrator | skipping: [testbed-node-5] 2025-07-04 17:49:50.061138 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-07-04 17:49:50.061785 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-07-04 17:49:50.063515 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-07-04 17:49:50.064151 | orchestrator | 2025-07-04 17:49:50.065071 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2025-07-04 17:49:50.066012 | orchestrator | Friday 04 July 2025 17:49:50 +0000 (0:00:00.746) 0:03:25.348 *********** 2025-07-04 17:49:50.121757 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-07-04 17:49:50.123213 | orchestrator | skipping: 
[testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-07-04 17:49:50.123298 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-07-04 17:49:50.124026 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-07-04 17:49:50.184845 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-07-04 17:49:50.185183 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-07-04 17:49:50.185547 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-07-04 17:49:50.186158 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-07-04 17:49:50.186537 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-07-04 17:49:50.187654 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-07-04 17:49:50.188628 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-07-04 17:49:50.188795 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-07-04 17:49:50.189179 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-07-04 17:49:50.189660 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-07-04 17:49:50.190191 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-07-04 17:49:50.193761 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-07-04 17:49:50.195324 | orchestrator | skipping: 
[testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-07-04 17:49:50.195374 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-07-04 17:49:50.195396 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-07-04 17:49:50.195415 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-07-04 17:49:50.197776 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-07-04 17:49:50.202368 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-07-04 17:49:50.203257 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-07-04 17:49:50.206529 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-07-04 17:49:50.207274 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-07-04 17:49:50.207783 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-07-04 17:49:50.208562 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-07-04 17:49:50.209979 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-07-04 17:49:50.228430 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-07-04 17:49:50.228658 | orchestrator | skipping: [testbed-manager] 2025-07-04 17:49:50.229254 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-07-04 17:49:50.229686 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  
2025-07-04 17:49:50.268483 | orchestrator | skipping: [testbed-node-3] 2025-07-04 17:49:50.268588 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-07-04 17:49:50.268604 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-07-04 17:49:50.268615 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-07-04 17:49:50.268646 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-07-04 17:49:50.268657 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-07-04 17:49:50.268669 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-07-04 17:49:50.301063 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-07-04 17:49:50.301257 | orchestrator | skipping: [testbed-node-4] 2025-07-04 17:49:50.301760 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-07-04 17:49:50.302190 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-07-04 17:49:53.903445 | orchestrator | skipping: [testbed-node-5] 2025-07-04 17:49:53.904446 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-07-04 17:49:53.906462 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-07-04 17:49:53.907557 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-07-04 17:49:53.908360 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-07-04 17:49:53.909556 | orchestrator | changed: 
[testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-07-04 17:49:53.910965 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-07-04 17:49:53.913046 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-07-04 17:49:53.913846 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-07-04 17:49:53.914662 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-07-04 17:49:53.915047 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-07-04 17:49:53.916046 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-07-04 17:49:53.916250 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-07-04 17:49:53.916651 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-07-04 17:49:53.917549 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-07-04 17:49:53.917926 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-07-04 17:49:53.919264 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-07-04 17:49:53.919612 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-07-04 17:49:53.920110 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-07-04 17:49:53.920354 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-07-04 17:49:53.921308 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-07-04 17:49:53.922073 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-07-04 17:49:53.922965 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-07-04 17:49:53.923073 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-07-04 17:49:53.924018 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-07-04 17:49:53.924395 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-07-04 17:49:53.924869 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-07-04 17:49:53.925503 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-07-04 17:49:53.925967 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-07-04 17:49:53.926555 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-07-04 17:49:53.926942 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-07-04 17:49:53.927404 | orchestrator |
2025-07-04 17:49:53.927766 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2025-07-04 17:49:53.928205 | orchestrator | Friday 04 July 2025 17:49:53 +0000 (0:00:03.841) 0:03:29.190 ***********
2025-07-04 17:49:54.562758 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2025-07-04 17:49:54.562885 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2025-07-04 17:49:54.562961 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2025-07-04 17:49:54.563052 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2025-07-04 17:49:54.564070 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2025-07-04 17:49:54.565631 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2025-07-04 17:49:54.567144 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2025-07-04 17:49:54.568774 | orchestrator |
2025-07-04 17:49:54.569883 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2025-07-04 17:49:54.570973 | orchestrator | Friday 04 July 2025 17:49:54 +0000 (0:00:00.657) 0:03:29.847 ***********
2025-07-04 17:49:54.630186 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-07-04 17:49:54.661993 | orchestrator | skipping: [testbed-manager]
2025-07-04 17:49:54.745869 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-07-04 17:49:55.099094 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-07-04 17:49:55.100425 | orchestrator | skipping: [testbed-node-0]
2025-07-04 17:49:55.101047 | orchestrator | skipping: [testbed-node-1]
2025-07-04 17:49:55.102977 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-07-04 17:49:55.104585 | orchestrator | skipping: [testbed-node-2]
2025-07-04 17:49:55.105996 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-07-04 17:49:55.107180 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-07-04 17:49:55.108278 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-07-04 17:49:55.109175 | orchestrator |
2025-07-04 17:49:55.109462 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2025-07-04 17:49:55.109951 | orchestrator | Friday 04 July 2025 17:49:55 +0000 (0:00:00.538) 0:03:30.385 ***********
2025-07-04 17:49:55.140869 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-07-04 17:49:55.174592 | orchestrator | skipping: [testbed-manager]
2025-07-04 17:49:55.294710 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-07-04 17:49:55.294932 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-07-04 17:49:55.697135 | orchestrator | skipping: [testbed-node-0]
2025-07-04 17:49:55.697876 | orchestrator | skipping: [testbed-node-1]
2025-07-04 17:49:55.698721 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-07-04 17:49:55.699194 | orchestrator | skipping: [testbed-node-2]
2025-07-04 17:49:55.703340 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-07-04 17:49:55.703528 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-07-04 17:49:55.703555 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-07-04 17:49:55.703567 | orchestrator |
2025-07-04 17:49:55.703581 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2025-07-04 17:49:55.703593 | orchestrator | Friday 04 July 2025 17:49:55 +0000 (0:00:00.600) 0:03:30.985 ***********
2025-07-04 17:49:55.756667 | orchestrator | skipping: [testbed-manager]
2025-07-04 17:49:55.814488 | orchestrator | skipping: [testbed-node-3]
2025-07-04 17:49:55.841824 | orchestrator | skipping: [testbed-node-4]
2025-07-04 17:49:55.872670 | orchestrator | skipping: [testbed-node-5]
2025-07-04 17:49:56.018368 | orchestrator | skipping: [testbed-node-0]
2025-07-04 17:49:56.021459 | orchestrator | skipping: [testbed-node-1]
2025-07-04 17:49:56.021518 | orchestrator | skipping: [testbed-node-2]
2025-07-04 17:49:56.021578 | orchestrator |
2025-07-04 17:49:56.022867 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2025-07-04 17:49:56.023617 | orchestrator | Friday 04 July 2025 17:49:56 +0000 (0:00:00.319) 0:03:31.305 ***********
2025-07-04 17:50:01.810410 | orchestrator | ok: [testbed-node-5]
2025-07-04 17:50:01.811496 | orchestrator | ok: [testbed-node-2]
2025-07-04 17:50:01.812327 | orchestrator | ok: [testbed-node-0]
2025-07-04 17:50:01.813237 | orchestrator | ok: [testbed-node-3]
2025-07-04 17:50:01.814285 | orchestrator | ok: [testbed-node-4]
2025-07-04 17:50:01.814348 | orchestrator | ok: [testbed-node-1]
2025-07-04 17:50:01.815234 | orchestrator | ok: [testbed-manager]
2025-07-04 17:50:01.815567 | orchestrator |
2025-07-04 17:50:01.816377 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2025-07-04 17:50:01.817254 | orchestrator | Friday 04 July 2025 17:50:01 +0000 (0:00:05.793) 0:03:37.098 ***********
2025-07-04 17:50:01.902159 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2025-07-04 17:50:01.902842 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2025-07-04 17:50:01.940750 | orchestrator | skipping: [testbed-manager]
2025-07-04 17:50:01.984064 | orchestrator | skipping: [testbed-node-3]
2025-07-04 17:50:01.986166 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2025-07-04 17:50:02.030500 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2025-07-04 17:50:02.030734 | orchestrator | skipping: [testbed-node-4]
2025-07-04 17:50:02.032164 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2025-07-04 17:50:02.074266 | orchestrator | skipping: [testbed-node-5]
2025-07-04 17:50:02.137384 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2025-07-04 17:50:02.137772 | orchestrator | skipping: [testbed-node-0]
2025-07-04 17:50:02.138950 | orchestrator | skipping: [testbed-node-1]
2025-07-04 17:50:02.141303 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2025-07-04 17:50:02.141682 | orchestrator | skipping: [testbed-node-2]
2025-07-04 17:50:02.142271 | orchestrator |
2025-07-04 17:50:02.143829 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2025-07-04 17:50:02.144194 | orchestrator | Friday 04 July 2025 17:50:02 +0000 (0:00:00.327) 0:03:37.426 ***********
2025-07-04 17:50:03.225813 | orchestrator | ok: [testbed-manager] => (item=cron)
2025-07-04 17:50:03.226133 | orchestrator | ok: [testbed-node-3] => (item=cron)
2025-07-04 17:50:03.230097 | orchestrator | ok: [testbed-node-5] => (item=cron)
2025-07-04 17:50:03.230199 | orchestrator | ok: [testbed-node-0] => (item=cron)
2025-07-04 17:50:03.230223 | orchestrator | ok: [testbed-node-4] => (item=cron)
2025-07-04 17:50:03.230243 | orchestrator | ok: [testbed-node-2] => (item=cron)
2025-07-04 17:50:03.231681 | orchestrator | ok: [testbed-node-1] => (item=cron)
2025-07-04 17:50:03.231730 | orchestrator |
2025-07-04 17:50:03.233493 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2025-07-04 17:50:03.234859 | orchestrator | Friday 04 July 2025 17:50:03 +0000 (0:00:01.086) 0:03:38.512 ***********
2025-07-04 17:50:03.761228 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-07-04 17:50:03.761594 | orchestrator |
2025-07-04 17:50:03.762074 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] *************************
2025-07-04 17:50:03.769170 | orchestrator | Friday 04 July 2025 17:50:03 +0000 (0:00:00.536) 0:03:39.049 ***********
2025-07-04 17:50:05.051396 | orchestrator | ok: [testbed-manager]
2025-07-04 17:50:05.051489 | orchestrator | ok: [testbed-node-3]
2025-07-04 17:50:05.051502 | orchestrator | ok: [testbed-node-0]
2025-07-04 17:50:05.052536 | orchestrator | ok: [testbed-node-5]
2025-07-04 17:50:05.053631 | orchestrator | ok: [testbed-node-4]
2025-07-04 17:50:05.054565 | orchestrator | ok: [testbed-node-1]
2025-07-04 17:50:05.055469 | orchestrator | ok: [testbed-node-2]
2025-07-04 17:50:05.056416 | orchestrator |
2025-07-04 17:50:05.057373 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
2025-07-04 17:50:05.059083 | orchestrator | Friday 04 July 2025 17:50:05 +0000 (0:00:01.287) 0:03:40.336 ***********
2025-07-04 17:50:05.712381 | orchestrator | ok: [testbed-manager]
2025-07-04 17:50:05.712468 | orchestrator | ok: [testbed-node-4]
2025-07-04 17:50:05.713482 | orchestrator | ok: [testbed-node-3]
2025-07-04 17:50:05.715078 | orchestrator | ok: [testbed-node-0]
2025-07-04 17:50:05.716130 | orchestrator | ok: [testbed-node-1]
2025-07-04 17:50:05.716697 | orchestrator | ok: [testbed-node-5]
2025-07-04 17:50:05.717611 | orchestrator | ok: [testbed-node-2]
2025-07-04 17:50:05.717626 | orchestrator |
2025-07-04 17:50:05.718140 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] **************
2025-07-04 17:50:05.718554 | orchestrator | Friday 04 July 2025 17:50:05 +0000 (0:00:00.662) 0:03:40.998 ***********
2025-07-04 17:50:06.376570 | orchestrator | changed: [testbed-manager]
2025-07-04 17:50:06.376746 | orchestrator | changed: [testbed-node-3]
2025-07-04 17:50:06.377674 | orchestrator | changed: [testbed-node-5]
2025-07-04 17:50:06.380298 | orchestrator | changed: [testbed-node-1]
2025-07-04 17:50:06.381029 | orchestrator | changed: [testbed-node-4]
2025-07-04 17:50:06.381833 | orchestrator | changed: [testbed-node-2]
2025-07-04 17:50:06.382947 | orchestrator | changed: [testbed-node-0]
2025-07-04 17:50:06.383787 | orchestrator |
2025-07-04 17:50:06.384736 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
2025-07-04 17:50:06.385633 | orchestrator | Friday 04 July 2025 17:50:06 +0000 (0:00:00.665) 0:03:41.664 ***********
2025-07-04 17:50:07.054415 | orchestrator | ok: [testbed-manager]
2025-07-04 17:50:07.054938 | orchestrator | ok: [testbed-node-1]
2025-07-04 17:50:07.055336 | orchestrator | ok: [testbed-node-0]
2025-07-04 17:50:07.056585 | orchestrator | ok: [testbed-node-4]
2025-07-04 17:50:07.057350 | orchestrator | ok: [testbed-node-5]
2025-07-04 17:50:07.058148 | orchestrator | ok: [testbed-node-2]
2025-07-04 17:50:07.058615 | orchestrator | ok: [testbed-node-3]
2025-07-04 17:50:07.059775 | orchestrator |
2025-07-04 17:50:07.060846 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] ****************************
2025-07-04 17:50:07.062644 | orchestrator | Friday 04 July 2025 17:50:07 +0000 (0:00:00.675) 0:03:42.340 ***********
2025-07-04 17:50:08.018531 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1751650001.2119486, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-04 17:50:08.019107 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1751650069.0934563, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-04 17:50:08.019398 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1751650094.733231, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-04 17:50:08.020087 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1751650065.513602, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-04 17:50:08.020801 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1751650075.422298, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-04 17:50:08.023133 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1751650065.6825159, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-04 17:50:08.025072 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1751650076.6189275, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-04 17:50:08.025795 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1751650082.499456, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-04 17:50:08.027179 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1751649969.1786542, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-04 17:50:08.028441 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1751649971.3010714, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-04 17:50:08.029788 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1751649963.521401, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-04 17:50:08.031279 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1751649984.4354477, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-04 17:50:08.032852 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1751649966.0148396, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-04 17:50:08.033276 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1751649970.454658, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-04 17:50:08.034400 | orchestrator |
2025-07-04 17:50:08.035390 | orchestrator | TASK [osism.commons.motd : Copy motd file] *************************************
2025-07-04 17:50:08.036285 | orchestrator | Friday 04 July 2025 17:50:08 +0000 (0:00:00.966) 0:03:43.306 ***********
2025-07-04 17:50:09.285644 | orchestrator | changed: [testbed-manager]
2025-07-04 17:50:09.286736 | orchestrator | changed: [testbed-node-3]
2025-07-04 17:50:09.288510 | orchestrator | changed: [testbed-node-5]
2025-07-04 17:50:09.290746 | orchestrator | changed: [testbed-node-4]
2025-07-04 17:50:09.290793 | orchestrator | changed: [testbed-node-0]
2025-07-04 17:50:09.291152 | orchestrator | changed: [testbed-node-1]
2025-07-04 17:50:09.291920 | orchestrator | changed: [testbed-node-2]
2025-07-04 17:50:09.293934 | orchestrator |
2025-07-04 17:50:09.294571 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************
2025-07-04 17:50:09.295255 | orchestrator | Friday 04 July 2025 17:50:09 +0000 (0:00:01.266) 0:03:44.572 ***********
2025-07-04 17:50:10.476142 | orchestrator | changed: [testbed-manager]
2025-07-04 17:50:10.476374 | orchestrator | changed: [testbed-node-3]
2025-07-04 17:50:10.477639 | orchestrator | changed: [testbed-node-5]
2025-07-04 17:50:10.478507 | orchestrator | changed: [testbed-node-4]
2025-07-04 17:50:10.479236 | orchestrator | changed: [testbed-node-2]
2025-07-04 17:50:10.480075 | orchestrator | changed: [testbed-node-0]
2025-07-04 17:50:10.481235 | orchestrator | changed: [testbed-node-1]
2025-07-04 17:50:10.482170 | orchestrator |
2025-07-04 17:50:10.483028 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ********************************
2025-07-04 17:50:10.483550 | orchestrator | Friday 04 July 2025 17:50:10 +0000 (0:00:01.189) 0:03:45.762 ***********
2025-07-04 17:50:11.654542 | orchestrator | changed: [testbed-manager]
2025-07-04 17:50:11.654644 | orchestrator | changed: [testbed-node-3]
2025-07-04 17:50:11.654659 | orchestrator | changed: [testbed-node-4]
2025-07-04 17:50:11.654735 | orchestrator | changed: [testbed-node-5]
2025-07-04 17:50:11.655995 | orchestrator | changed: [testbed-node-0]
2025-07-04 17:50:11.657653 | orchestrator | changed: [testbed-node-1]
2025-07-04 17:50:11.657951 | orchestrator | changed: [testbed-node-2]
2025-07-04 17:50:11.659073 | orchestrator |
2025-07-04 17:50:11.659889 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ********************
2025-07-04 17:50:11.660841 | orchestrator | Friday 04 July 2025 17:50:11 +0000 (0:00:01.173) 0:03:46.936 ***********
2025-07-04 17:50:11.753884 | orchestrator | skipping: [testbed-manager]
2025-07-04 17:50:11.802114 | orchestrator | skipping: [testbed-node-3]
2025-07-04 17:50:11.838458 | orchestrator | skipping: [testbed-node-4]
2025-07-04 17:50:11.883837 | orchestrator | skipping: [testbed-node-5]
2025-07-04 17:50:11.950532 | orchestrator | skipping: [testbed-node-0]
2025-07-04 17:50:11.951067 | orchestrator | skipping: [testbed-node-1]
2025-07-04 17:50:11.954323 | orchestrator | skipping: [testbed-node-2]
2025-07-04 17:50:11.954611 | orchestrator |
2025-07-04 17:50:11.955592 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] ****************
2025-07-04 17:50:11.956325 | orchestrator | Friday 04 July 2025 17:50:11 +0000 (0:00:00.302) 0:03:47.238 ***********
2025-07-04 17:50:12.666196 | orchestrator | ok: [testbed-manager]
2025-07-04 17:50:12.666520 | orchestrator | ok: [testbed-node-3]
2025-07-04 17:50:12.666560 | orchestrator | ok: [testbed-node-4]
2025-07-04 17:50:12.666891 | orchestrator | ok: [testbed-node-5]
2025-07-04 17:50:12.667885 | orchestrator | ok: [testbed-node-0]
2025-07-04 17:50:12.668098 | orchestrator | ok: [testbed-node-1]
2025-07-04 17:50:12.669377 | orchestrator | ok: [testbed-node-2]
2025-07-04 17:50:12.670227 | orchestrator |
2025-07-04 17:50:12.670632 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ********
2025-07-04 17:50:12.671773 | orchestrator | Friday 04 July 2025 17:50:12 +0000 (0:00:00.714) 0:03:47.953 ***********
2025-07-04 17:50:13.071760 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
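(Editor's note: the roles in this log, such as osism.services.rng above, repeatedly use the Ansible pattern of including a distribution-specific task file like install-Debian-family.yml. The following is a minimal, illustrative sketch of that pattern; the file layout and package name are assumptions based on common Ansible conventions, not the actual contents of the OSISM role.)

```yaml
# tasks/main.yml -- dispatch to a per-OS-family task file (illustrative sketch)
- name: Include distribution specific install tasks
  ansible.builtin.include_tasks: "install-{{ ansible_facts['os_family'] }}-family.yml"

# tasks/install-Debian-family.yml might then install the package, e.g.:
# - name: Install rng package
#   ansible.builtin.apt:
#     name: rng-tools   # assumed package name; the role may use a variable instead
#     state: present
```

This keeps the role's entry point distribution-agnostic while confining apt/dnf specifics to one file per family, which matches the "included: .../install-Debian-family.yml for testbed-manager, ..." lines seen in the log.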
2025-07-04 17:50:13.071841 | orchestrator |
2025-07-04 17:50:13.074376 | orchestrator | TASK [osism.services.rng : Install rng package] ********************************
2025-07-04 17:50:13.074794 | orchestrator | Friday 04 July 2025 17:50:13 +0000 (0:00:00.404) 0:03:48.358 ***********
2025-07-04 17:50:21.309178 | orchestrator | ok: [testbed-manager]
2025-07-04 17:50:21.309886 | orchestrator | changed: [testbed-node-3]
2025-07-04 17:50:21.310169 | orchestrator | changed: [testbed-node-5]
2025-07-04 17:50:21.311313 | orchestrator | changed: [testbed-node-0]
2025-07-04 17:50:21.313030 | orchestrator | changed: [testbed-node-1]
2025-07-04 17:50:21.315332 | orchestrator | changed: [testbed-node-2]
2025-07-04 17:50:21.316888 | orchestrator | changed: [testbed-node-4]
2025-07-04 17:50:21.317411 | orchestrator |
2025-07-04 17:50:21.318589 | orchestrator | TASK [osism.services.rng : Remove haveged package] *****************************
2025-07-04 17:50:21.319480 | orchestrator | Friday 04 July 2025 17:50:21 +0000 (0:00:08.237) 0:03:56.595 ***********
2025-07-04 17:50:22.581135 | orchestrator | ok: [testbed-manager]
2025-07-04 17:50:22.581353 | orchestrator | ok: [testbed-node-3]
2025-07-04 17:50:22.581376 | orchestrator | ok: [testbed-node-5]
2025-07-04 17:50:22.581486 | orchestrator | ok: [testbed-node-1]
2025-07-04 17:50:22.587312 | orchestrator | ok: [testbed-node-4]
2025-07-04 17:50:22.587500 | orchestrator | ok: [testbed-node-0]
2025-07-04 17:50:22.587558 | orchestrator | ok: [testbed-node-2]
2025-07-04 17:50:22.587572 | orchestrator |
2025-07-04 17:50:22.587587 | orchestrator | TASK [osism.services.rng : Manage rng service] *********************************
2025-07-04 17:50:22.587600 | orchestrator | Friday 04 July 2025 17:50:22 +0000 (0:00:01.271) 0:03:57.866 ***********
2025-07-04 17:50:24.704035 | orchestrator | ok: [testbed-node-3]
2025-07-04 17:50:24.705397 | orchestrator | ok: [testbed-manager]
2025-07-04 17:50:24.708425 | orchestrator | ok: [testbed-node-4]
2025-07-04 17:50:24.710107 | orchestrator | ok: [testbed-node-5]
2025-07-04 17:50:24.711251 | orchestrator | ok: [testbed-node-0]
2025-07-04 17:50:24.711868 | orchestrator | ok: [testbed-node-2]
2025-07-04 17:50:24.712775 | orchestrator | ok: [testbed-node-1]
2025-07-04 17:50:24.714639 | orchestrator |
2025-07-04 17:50:24.714667 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2025-07-04 17:50:24.715450 | orchestrator | Friday 04 July 2025 17:50:24 +0000 (0:00:02.121) 0:03:59.988 ***********
2025-07-04 17:50:25.210981 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-07-04 17:50:25.214383 | orchestrator |
2025-07-04 17:50:25.214441 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2025-07-04 17:50:25.214455 | orchestrator | Friday 04 July 2025 17:50:25 +0000 (0:00:00.509) 0:04:00.498 ***********
2025-07-04 17:50:33.373311 | orchestrator | changed: [testbed-node-1]
2025-07-04 17:50:33.373431 | orchestrator | changed: [testbed-node-3]
2025-07-04 17:50:33.373446 | orchestrator | changed: [testbed-node-0]
2025-07-04 17:50:33.373985 | orchestrator | changed: [testbed-node-5]
2025-07-04 17:50:33.374997 | orchestrator | changed: [testbed-node-2]
2025-07-04 17:50:33.377038 | orchestrator | changed: [testbed-node-4]
2025-07-04 17:50:33.377399 | orchestrator | changed: [testbed-manager]
2025-07-04 17:50:33.377978 | orchestrator |
2025-07-04 17:50:33.378815 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2025-07-04 17:50:33.379453 | orchestrator | Friday 04 July 2025 17:50:33 +0000 (0:00:08.161) 0:04:08.659 ***********
2025-07-04 17:50:34.070187 | orchestrator | changed: [testbed-manager]
2025-07-04 17:50:34.071331 | orchestrator | changed: [testbed-node-3]
2025-07-04 17:50:34.072707 | orchestrator | changed: [testbed-node-4]
2025-07-04 17:50:34.073158 | orchestrator | changed: [testbed-node-5]
2025-07-04 17:50:34.073834 | orchestrator | changed: [testbed-node-0]
2025-07-04 17:50:34.074495 | orchestrator | changed: [testbed-node-1]
2025-07-04 17:50:34.076308 | orchestrator | changed: [testbed-node-2]
2025-07-04 17:50:34.076620 | orchestrator |
2025-07-04 17:50:34.077205 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2025-07-04 17:50:34.077599 | orchestrator | Friday 04 July 2025 17:50:34 +0000 (0:00:00.698) 0:04:09.357 ***********
2025-07-04 17:50:35.243127 | orchestrator | changed: [testbed-manager]
2025-07-04 17:50:35.246199 | orchestrator | changed: [testbed-node-3]
2025-07-04 17:50:35.248543 | orchestrator | changed: [testbed-node-4]
2025-07-04 17:50:35.248584 | orchestrator | changed: [testbed-node-5]
2025-07-04 17:50:35.249407 | orchestrator | changed: [testbed-node-0]
2025-07-04 17:50:35.250364 | orchestrator | changed: [testbed-node-1]
2025-07-04 17:50:35.251682 | orchestrator | changed: [testbed-node-2]
2025-07-04 17:50:35.254105 | orchestrator |
2025-07-04 17:50:35.254205 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2025-07-04 17:50:35.254232 | orchestrator | Friday 04 July 2025 17:50:35 +0000 (0:00:01.171) 0:04:10.529 ***********
2025-07-04 17:50:36.327265 | orchestrator | changed: [testbed-manager]
2025-07-04 17:50:36.329683 | orchestrator | changed: [testbed-node-3]
2025-07-04 17:50:36.329794 | orchestrator | changed: [testbed-node-4]
2025-07-04 17:50:36.330744 | orchestrator | changed: [testbed-node-5]
2025-07-04 17:50:36.331234 | orchestrator | changed: [testbed-node-0]
2025-07-04 17:50:36.331967 | orchestrator | changed: [testbed-node-1]
2025-07-04 17:50:36.332723 | orchestrator | changed: [testbed-node-2]
2025-07-04 17:50:36.333373 | orchestrator |
2025-07-04 17:50:36.333888 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ******
2025-07-04 17:50:36.334608 | orchestrator | Friday 04 July 2025 17:50:36 +0000 (0:00:01.084) 0:04:11.613 ***********
2025-07-04 17:50:36.437134 | orchestrator | ok: [testbed-manager]
2025-07-04 17:50:36.467678 | orchestrator | ok: [testbed-node-3]
2025-07-04 17:50:36.501942 | orchestrator | ok: [testbed-node-4]
2025-07-04 17:50:36.541386 | orchestrator | ok: [testbed-node-5]
2025-07-04 17:50:36.624973 | orchestrator | ok: [testbed-node-0]
2025-07-04 17:50:36.625575 | orchestrator | ok: [testbed-node-1]
2025-07-04 17:50:36.627491 | orchestrator | ok: [testbed-node-2]
2025-07-04 17:50:36.628035 | orchestrator |
2025-07-04 17:50:36.628713 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2025-07-04 17:50:36.629328 | orchestrator | Friday 04 July 2025 17:50:36 +0000 (0:00:00.299) 0:04:11.913 ***********
2025-07-04 17:50:36.736285 | orchestrator | ok: [testbed-manager]
2025-07-04 17:50:36.777693 | orchestrator | ok: [testbed-node-3]
2025-07-04 17:50:36.817572 | orchestrator | ok: [testbed-node-4]
2025-07-04 17:50:36.851275 | orchestrator | ok: [testbed-node-5]
2025-07-04 17:50:36.951768 | orchestrator | ok: [testbed-node-0]
2025-07-04 17:50:36.952846 | orchestrator | ok: [testbed-node-1]
2025-07-04 17:50:36.954091 | orchestrator | ok: [testbed-node-2]
2025-07-04 17:50:36.955060 | orchestrator |
2025-07-04 17:50:36.956144 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2025-07-04 17:50:36.957415 | orchestrator | Friday 04 July 2025 17:50:36 +0000 (0:00:00.327) 0:04:12.240 ***********
2025-07-04 17:50:37.071147 | orchestrator | ok: [testbed-manager]
2025-07-04 17:50:37.104700 | orchestrator | ok: [testbed-node-3]
2025-07-04 17:50:37.139485 | orchestrator | ok: [testbed-node-4]
2025-07-04 17:50:37.175958 | orchestrator | ok: [testbed-node-5]
2025-07-04 17:50:37.261202 | orchestrator | ok: [testbed-node-0]
2025-07-04 17:50:37.262127 | orchestrator | ok: [testbed-node-1]
2025-07-04 17:50:37.263435 | orchestrator | ok: [testbed-node-2]
2025-07-04 17:50:37.264450 | orchestrator |
2025-07-04 17:50:37.265266 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2025-07-04 17:50:37.266632 | orchestrator | Friday 04 July 2025 17:50:37 +0000 (0:00:00.309) 0:04:12.549 ***********
2025-07-04 17:50:42.878665 | orchestrator | ok: [testbed-manager]
2025-07-04 17:50:42.879529 | orchestrator | ok: [testbed-node-3]
2025-07-04 17:50:42.880069 | orchestrator | ok: [testbed-node-0]
2025-07-04 17:50:42.881156 | orchestrator | ok: [testbed-node-4]
2025-07-04 17:50:42.885315 | orchestrator | ok: [testbed-node-2]
2025-07-04 17:50:42.885349 | orchestrator | ok: [testbed-node-5]
2025-07-04 17:50:42.885354 | orchestrator | ok: [testbed-node-1]
2025-07-04 17:50:42.885358 | orchestrator |
2025-07-04 17:50:42.885364 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2025-07-04 17:50:42.885369 | orchestrator | Friday 04 July 2025 17:50:42 +0000 (0:00:05.614) 0:04:18.164 ***********
2025-07-04 17:50:43.336092 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-07-04 17:50:43.341381 | orchestrator |
2025-07-04 17:50:43.341812 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2025-07-04 17:50:43.342952 | orchestrator | Friday 04 July 2025 17:50:43 +0000 (0:00:00.459) 0:04:18.623 ***********
2025-07-04 17:50:43.407622 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2025-07-04 17:50:43.453391 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2025-07-04 17:50:43.453865 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2025-07-04 17:50:43.454649 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2025-07-04 17:50:43.456250 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2025-07-04 17:50:43.513731 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2025-07-04 17:50:43.514383 | orchestrator | skipping: [testbed-manager]
2025-07-04 17:50:43.514780 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2025-07-04 17:50:43.515635 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2025-07-04 17:50:43.556712 | orchestrator | skipping: [testbed-node-3]
2025-07-04 17:50:43.556807 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2025-07-04 17:50:43.559341 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2025-07-04 17:50:43.591941 | orchestrator | skipping: [testbed-node-4]
2025-07-04 17:50:43.593060 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2025-07-04 17:50:43.659339 | orchestrator | skipping: [testbed-node-5]
2025-07-04 17:50:43.663983 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2025-07-04 17:50:43.664032 | orchestrator | skipping: [testbed-node-0]
2025-07-04 17:50:43.664046 | orchestrator | skipping: [testbed-node-1]
2025-07-04 17:50:43.664743 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2025-07-04 17:50:43.666645 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2025-07-04 17:50:43.667545 | orchestrator | skipping: [testbed-node-2]
2025-07-04 17:50:43.668464 | orchestrator |
2025-07-04 17:50:43.670341 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2025-07-04 17:50:43.670380 | orchestrator | Friday 04 July 2025 17:50:43 +0000 (0:00:00.322) 0:04:18.946 ***********
2025-07-04 17:50:44.097672 | orchestrator |
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-07-04 17:50:44.101755 | orchestrator | 2025-07-04 17:50:44.101848 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2025-07-04 17:50:44.104409 | orchestrator | Friday 04 July 2025 17:50:44 +0000 (0:00:00.436) 0:04:19.383 *********** 2025-07-04 17:50:44.179878 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2025-07-04 17:50:44.223179 | orchestrator | skipping: [testbed-manager] 2025-07-04 17:50:44.224248 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)  2025-07-04 17:50:44.224970 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)  2025-07-04 17:50:44.258366 | orchestrator | skipping: [testbed-node-3] 2025-07-04 17:50:44.299272 | orchestrator | skipping: [testbed-node-4] 2025-07-04 17:50:44.302670 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2025-07-04 17:50:44.302787 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)  2025-07-04 17:50:44.334742 | orchestrator | skipping: [testbed-node-5] 2025-07-04 17:50:44.432444 | orchestrator | skipping: [testbed-node-0] 2025-07-04 17:50:44.434279 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2025-07-04 17:50:44.438455 | orchestrator | skipping: [testbed-node-1] 2025-07-04 17:50:44.438518 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2025-07-04 17:50:44.438525 | orchestrator | skipping: [testbed-node-2] 2025-07-04 17:50:44.438529 | orchestrator | 2025-07-04 17:50:44.438561 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 2025-07-04 17:50:44.439812 | orchestrator | Friday 04 July 2025 17:50:44 +0000 
(0:00:00.337) 0:04:19.720 *********** 2025-07-04 17:50:44.982867 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-07-04 17:50:44.987661 | orchestrator | 2025-07-04 17:50:44.989786 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] ********************** 2025-07-04 17:50:44.991225 | orchestrator | Friday 04 July 2025 17:50:44 +0000 (0:00:00.549) 0:04:20.269 *********** 2025-07-04 17:51:19.245585 | orchestrator | changed: [testbed-manager] 2025-07-04 17:51:19.245776 | orchestrator | changed: [testbed-node-5] 2025-07-04 17:51:19.245798 | orchestrator | changed: [testbed-node-0] 2025-07-04 17:51:19.248012 | orchestrator | changed: [testbed-node-3] 2025-07-04 17:51:19.249350 | orchestrator | changed: [testbed-node-1] 2025-07-04 17:51:19.250443 | orchestrator | changed: [testbed-node-2] 2025-07-04 17:51:19.251155 | orchestrator | changed: [testbed-node-4] 2025-07-04 17:51:19.252489 | orchestrator | 2025-07-04 17:51:19.252684 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************ 2025-07-04 17:51:19.253798 | orchestrator | Friday 04 July 2025 17:51:19 +0000 (0:00:34.259) 0:04:54.529 *********** 2025-07-04 17:51:27.176543 | orchestrator | changed: [testbed-manager] 2025-07-04 17:51:27.178542 | orchestrator | changed: [testbed-node-5] 2025-07-04 17:51:27.179683 | orchestrator | changed: [testbed-node-0] 2025-07-04 17:51:27.180746 | orchestrator | changed: [testbed-node-3] 2025-07-04 17:51:27.183076 | orchestrator | changed: [testbed-node-1] 2025-07-04 17:51:27.183786 | orchestrator | changed: [testbed-node-2] 2025-07-04 17:51:27.185134 | orchestrator | changed: [testbed-node-4] 2025-07-04 17:51:27.186338 | orchestrator | 2025-07-04 17:51:27.186796 | orchestrator | TASK [osism.commons.cleanup : 
Uninstall unattended-upgrades package] *********** 2025-07-04 17:51:27.187877 | orchestrator | Friday 04 July 2025 17:51:27 +0000 (0:00:07.932) 0:05:02.462 *********** 2025-07-04 17:51:34.950145 | orchestrator | changed: [testbed-node-3] 2025-07-04 17:51:34.950231 | orchestrator | changed: [testbed-node-5] 2025-07-04 17:51:34.950634 | orchestrator | changed: [testbed-manager] 2025-07-04 17:51:34.952130 | orchestrator | changed: [testbed-node-0] 2025-07-04 17:51:34.953862 | orchestrator | changed: [testbed-node-1] 2025-07-04 17:51:34.955253 | orchestrator | changed: [testbed-node-2] 2025-07-04 17:51:34.955931 | orchestrator | changed: [testbed-node-4] 2025-07-04 17:51:34.956850 | orchestrator | 2025-07-04 17:51:34.958056 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] ********** 2025-07-04 17:51:34.958537 | orchestrator | Friday 04 July 2025 17:51:34 +0000 (0:00:07.776) 0:05:10.238 *********** 2025-07-04 17:51:36.601744 | orchestrator | ok: [testbed-manager] 2025-07-04 17:51:36.602414 | orchestrator | ok: [testbed-node-3] 2025-07-04 17:51:36.604528 | orchestrator | ok: [testbed-node-5] 2025-07-04 17:51:36.605477 | orchestrator | ok: [testbed-node-0] 2025-07-04 17:51:36.606356 | orchestrator | ok: [testbed-node-1] 2025-07-04 17:51:36.608593 | orchestrator | ok: [testbed-node-2] 2025-07-04 17:51:36.608625 | orchestrator | ok: [testbed-node-4] 2025-07-04 17:51:36.612021 | orchestrator | 2025-07-04 17:51:36.612944 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2025-07-04 17:51:36.613219 | orchestrator | Friday 04 July 2025 17:51:36 +0000 (0:00:01.650) 0:05:11.888 *********** 2025-07-04 17:51:43.103986 | orchestrator | changed: [testbed-node-5] 2025-07-04 17:51:43.104597 | orchestrator | changed: [testbed-node-3] 2025-07-04 17:51:43.105037 | orchestrator | changed: [testbed-node-1] 2025-07-04 17:51:43.107226 | orchestrator | changed: [testbed-node-0] 2025-07-04 17:51:43.107263 | 
orchestrator | changed: [testbed-node-2] 2025-07-04 17:51:43.107571 | orchestrator | changed: [testbed-node-4] 2025-07-04 17:51:43.108221 | orchestrator | changed: [testbed-manager] 2025-07-04 17:51:43.108863 | orchestrator | 2025-07-04 17:51:43.109441 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] ************************* 2025-07-04 17:51:43.110454 | orchestrator | Friday 04 July 2025 17:51:43 +0000 (0:00:06.502) 0:05:18.391 *********** 2025-07-04 17:51:43.537876 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-07-04 17:51:43.541266 | orchestrator | 2025-07-04 17:51:43.541326 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] ******* 2025-07-04 17:51:43.541341 | orchestrator | Friday 04 July 2025 17:51:43 +0000 (0:00:00.432) 0:05:18.824 *********** 2025-07-04 17:51:44.247758 | orchestrator | changed: [testbed-manager] 2025-07-04 17:51:44.247848 | orchestrator | changed: [testbed-node-3] 2025-07-04 17:51:44.249126 | orchestrator | changed: [testbed-node-4] 2025-07-04 17:51:44.250099 | orchestrator | changed: [testbed-node-5] 2025-07-04 17:51:44.251352 | orchestrator | changed: [testbed-node-0] 2025-07-04 17:51:44.252277 | orchestrator | changed: [testbed-node-1] 2025-07-04 17:51:44.253576 | orchestrator | changed: [testbed-node-2] 2025-07-04 17:51:44.254444 | orchestrator | 2025-07-04 17:51:44.255823 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2025-07-04 17:51:44.256289 | orchestrator | Friday 04 July 2025 17:51:44 +0000 (0:00:00.711) 0:05:19.535 *********** 2025-07-04 17:51:45.911381 | orchestrator | ok: [testbed-node-3] 2025-07-04 17:51:45.911476 | orchestrator | ok: [testbed-manager] 2025-07-04 17:51:45.912672 | orchestrator | ok: [testbed-node-0] 
2025-07-04 17:51:45.913956 | orchestrator | ok: [testbed-node-1] 2025-07-04 17:51:45.915001 | orchestrator | ok: [testbed-node-5] 2025-07-04 17:51:45.916265 | orchestrator | ok: [testbed-node-2] 2025-07-04 17:51:45.920023 | orchestrator | ok: [testbed-node-4] 2025-07-04 17:51:45.924603 | orchestrator | 2025-07-04 17:51:45.924651 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2025-07-04 17:51:45.924675 | orchestrator | Friday 04 July 2025 17:51:45 +0000 (0:00:01.657) 0:05:21.193 *********** 2025-07-04 17:51:46.714544 | orchestrator | changed: [testbed-node-0] 2025-07-04 17:51:46.714780 | orchestrator | changed: [testbed-node-1] 2025-07-04 17:51:46.715201 | orchestrator | changed: [testbed-node-5] 2025-07-04 17:51:46.716208 | orchestrator | changed: [testbed-node-3] 2025-07-04 17:51:46.716256 | orchestrator | changed: [testbed-node-2] 2025-07-04 17:51:46.716274 | orchestrator | changed: [testbed-manager] 2025-07-04 17:51:46.718482 | orchestrator | changed: [testbed-node-4] 2025-07-04 17:51:46.718602 | orchestrator | 2025-07-04 17:51:46.718699 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2025-07-04 17:51:46.719030 | orchestrator | Friday 04 July 2025 17:51:46 +0000 (0:00:00.808) 0:05:22.001 *********** 2025-07-04 17:51:46.820886 | orchestrator | skipping: [testbed-manager] 2025-07-04 17:51:46.879998 | orchestrator | skipping: [testbed-node-3] 2025-07-04 17:51:46.919313 | orchestrator | skipping: [testbed-node-4] 2025-07-04 17:51:46.957147 | orchestrator | skipping: [testbed-node-5] 2025-07-04 17:51:47.030354 | orchestrator | skipping: [testbed-node-0] 2025-07-04 17:51:47.031398 | orchestrator | skipping: [testbed-node-1] 2025-07-04 17:51:47.032389 | orchestrator | skipping: [testbed-node-2] 2025-07-04 17:51:47.035329 | orchestrator | 2025-07-04 17:51:47.035364 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 
2025-07-04 17:51:47.035372 | orchestrator | Friday 04 July 2025 17:51:47 +0000 (0:00:00.317) 0:05:22.318 *********** 2025-07-04 17:51:47.151423 | orchestrator | skipping: [testbed-manager] 2025-07-04 17:51:47.195546 | orchestrator | skipping: [testbed-node-3] 2025-07-04 17:51:47.231818 | orchestrator | skipping: [testbed-node-4] 2025-07-04 17:51:47.265926 | orchestrator | skipping: [testbed-node-5] 2025-07-04 17:51:47.452453 | orchestrator | skipping: [testbed-node-0] 2025-07-04 17:51:47.456600 | orchestrator | skipping: [testbed-node-1] 2025-07-04 17:51:47.456993 | orchestrator | skipping: [testbed-node-2] 2025-07-04 17:51:47.458210 | orchestrator | 2025-07-04 17:51:47.459438 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2025-07-04 17:51:47.460548 | orchestrator | Friday 04 July 2025 17:51:47 +0000 (0:00:00.420) 0:05:22.739 *********** 2025-07-04 17:51:47.568245 | orchestrator | ok: [testbed-manager] 2025-07-04 17:51:47.610614 | orchestrator | ok: [testbed-node-3] 2025-07-04 17:51:47.650297 | orchestrator | ok: [testbed-node-4] 2025-07-04 17:51:47.684736 | orchestrator | ok: [testbed-node-5] 2025-07-04 17:51:47.768007 | orchestrator | ok: [testbed-node-0] 2025-07-04 17:51:47.768489 | orchestrator | ok: [testbed-node-1] 2025-07-04 17:51:47.769388 | orchestrator | ok: [testbed-node-2] 2025-07-04 17:51:47.770122 | orchestrator | 2025-07-04 17:51:47.773272 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2025-07-04 17:51:47.773301 | orchestrator | Friday 04 July 2025 17:51:47 +0000 (0:00:00.317) 0:05:23.056 *********** 2025-07-04 17:51:47.879623 | orchestrator | skipping: [testbed-manager] 2025-07-04 17:51:47.916956 | orchestrator | skipping: [testbed-node-3] 2025-07-04 17:51:47.958621 | orchestrator | skipping: [testbed-node-4] 2025-07-04 17:51:47.999360 | orchestrator | skipping: [testbed-node-5] 2025-07-04 17:51:48.058059 | orchestrator | skipping: [testbed-node-0] 
2025-07-04 17:51:48.058154 | orchestrator | skipping: [testbed-node-1] 2025-07-04 17:51:48.058218 | orchestrator | skipping: [testbed-node-2] 2025-07-04 17:51:48.060160 | orchestrator | 2025-07-04 17:51:48.061083 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2025-07-04 17:51:48.062941 | orchestrator | Friday 04 July 2025 17:51:48 +0000 (0:00:00.291) 0:05:23.347 *********** 2025-07-04 17:51:48.174171 | orchestrator | ok: [testbed-manager] 2025-07-04 17:51:48.211381 | orchestrator | ok: [testbed-node-3] 2025-07-04 17:51:48.264502 | orchestrator | ok: [testbed-node-4] 2025-07-04 17:51:48.299263 | orchestrator | ok: [testbed-node-5] 2025-07-04 17:51:48.385065 | orchestrator | ok: [testbed-node-0] 2025-07-04 17:51:48.386270 | orchestrator | ok: [testbed-node-1] 2025-07-04 17:51:48.387193 | orchestrator | ok: [testbed-node-2] 2025-07-04 17:51:48.388480 | orchestrator | 2025-07-04 17:51:48.388794 | orchestrator | TASK [osism.services.docker : Print used docker version] *********************** 2025-07-04 17:51:48.390246 | orchestrator | Friday 04 July 2025 17:51:48 +0000 (0:00:00.324) 0:05:23.672 *********** 2025-07-04 17:51:48.500704 | orchestrator | ok: [testbed-manager] =>  2025-07-04 17:51:48.500890 | orchestrator |  docker_version: 5:27.5.1 2025-07-04 17:51:48.537710 | orchestrator | ok: [testbed-node-3] =>  2025-07-04 17:51:48.537805 | orchestrator |  docker_version: 5:27.5.1 2025-07-04 17:51:48.574735 | orchestrator | ok: [testbed-node-4] =>  2025-07-04 17:51:48.576423 | orchestrator |  docker_version: 5:27.5.1 2025-07-04 17:51:48.612130 | orchestrator | ok: [testbed-node-5] =>  2025-07-04 17:51:48.614125 | orchestrator |  docker_version: 5:27.5.1 2025-07-04 17:51:48.688999 | orchestrator | ok: [testbed-node-0] =>  2025-07-04 17:51:48.690286 | orchestrator |  docker_version: 5:27.5.1 2025-07-04 17:51:48.692039 | orchestrator | ok: [testbed-node-1] =>  2025-07-04 17:51:48.695481 | orchestrator |  docker_version: 
5:27.5.1 2025-07-04 17:51:48.700043 | orchestrator | ok: [testbed-node-2] =>  2025-07-04 17:51:48.701442 | orchestrator |  docker_version: 5:27.5.1 2025-07-04 17:51:48.702271 | orchestrator | 2025-07-04 17:51:48.703325 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2025-07-04 17:51:48.703885 | orchestrator | Friday 04 July 2025 17:51:48 +0000 (0:00:00.305) 0:05:23.978 *********** 2025-07-04 17:51:48.830981 | orchestrator | ok: [testbed-manager] =>  2025-07-04 17:51:48.831362 | orchestrator |  docker_cli_version: 5:27.5.1 2025-07-04 17:51:48.988757 | orchestrator | ok: [testbed-node-3] =>  2025-07-04 17:51:48.990511 | orchestrator |  docker_cli_version: 5:27.5.1 2025-07-04 17:51:49.049520 | orchestrator | ok: [testbed-node-4] =>  2025-07-04 17:51:49.049961 | orchestrator |  docker_cli_version: 5:27.5.1 2025-07-04 17:51:49.086295 | orchestrator | ok: [testbed-node-5] =>  2025-07-04 17:51:49.087234 | orchestrator |  docker_cli_version: 5:27.5.1 2025-07-04 17:51:49.159067 | orchestrator | ok: [testbed-node-0] =>  2025-07-04 17:51:49.159982 | orchestrator |  docker_cli_version: 5:27.5.1 2025-07-04 17:51:49.160524 | orchestrator | ok: [testbed-node-1] =>  2025-07-04 17:51:49.163673 | orchestrator |  docker_cli_version: 5:27.5.1 2025-07-04 17:51:49.164295 | orchestrator | ok: [testbed-node-2] =>  2025-07-04 17:51:49.165040 | orchestrator |  docker_cli_version: 5:27.5.1 2025-07-04 17:51:49.165793 | orchestrator | 2025-07-04 17:51:49.166944 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2025-07-04 17:51:49.167148 | orchestrator | Friday 04 July 2025 17:51:49 +0000 (0:00:00.468) 0:05:24.446 *********** 2025-07-04 17:51:49.246387 | orchestrator | skipping: [testbed-manager] 2025-07-04 17:51:49.292198 | orchestrator | skipping: [testbed-node-3] 2025-07-04 17:51:49.325315 | orchestrator | skipping: [testbed-node-4] 2025-07-04 17:51:49.357135 | orchestrator | skipping: 
[testbed-node-5] 2025-07-04 17:51:49.391282 | orchestrator | skipping: [testbed-node-0] 2025-07-04 17:51:49.454334 | orchestrator | skipping: [testbed-node-1] 2025-07-04 17:51:49.454932 | orchestrator | skipping: [testbed-node-2] 2025-07-04 17:51:49.455790 | orchestrator | 2025-07-04 17:51:49.456591 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2025-07-04 17:51:49.457471 | orchestrator | Friday 04 July 2025 17:51:49 +0000 (0:00:00.296) 0:05:24.743 *********** 2025-07-04 17:51:49.573252 | orchestrator | skipping: [testbed-manager] 2025-07-04 17:51:49.610750 | orchestrator | skipping: [testbed-node-3] 2025-07-04 17:51:49.645468 | orchestrator | skipping: [testbed-node-4] 2025-07-04 17:51:49.721247 | orchestrator | skipping: [testbed-node-5] 2025-07-04 17:51:49.798451 | orchestrator | skipping: [testbed-node-0] 2025-07-04 17:51:49.799917 | orchestrator | skipping: [testbed-node-1] 2025-07-04 17:51:49.800529 | orchestrator | skipping: [testbed-node-2] 2025-07-04 17:51:49.801375 | orchestrator | 2025-07-04 17:51:49.801886 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2025-07-04 17:51:49.802568 | orchestrator | Friday 04 July 2025 17:51:49 +0000 (0:00:00.339) 0:05:25.082 *********** 2025-07-04 17:51:50.236311 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-07-04 17:51:50.237679 | orchestrator | 2025-07-04 17:51:50.238846 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2025-07-04 17:51:50.239692 | orchestrator | Friday 04 July 2025 17:51:50 +0000 (0:00:00.439) 0:05:25.522 *********** 2025-07-04 17:51:51.124550 | orchestrator | ok: [testbed-manager] 2025-07-04 17:51:51.126423 | orchestrator | ok: 
[testbed-node-5] 2025-07-04 17:51:51.127584 | orchestrator | ok: [testbed-node-0] 2025-07-04 17:51:51.128485 | orchestrator | ok: [testbed-node-1] 2025-07-04 17:51:51.129134 | orchestrator | ok: [testbed-node-3] 2025-07-04 17:51:51.129802 | orchestrator | ok: [testbed-node-4] 2025-07-04 17:51:51.130658 | orchestrator | ok: [testbed-node-2] 2025-07-04 17:51:51.130968 | orchestrator | 2025-07-04 17:51:51.131976 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2025-07-04 17:51:51.132371 | orchestrator | Friday 04 July 2025 17:51:51 +0000 (0:00:00.886) 0:05:26.408 *********** 2025-07-04 17:51:53.911995 | orchestrator | ok: [testbed-node-0] 2025-07-04 17:51:53.912138 | orchestrator | ok: [testbed-node-5] 2025-07-04 17:51:53.912324 | orchestrator | ok: [testbed-node-2] 2025-07-04 17:51:53.913178 | orchestrator | ok: [testbed-node-3] 2025-07-04 17:51:53.914129 | orchestrator | ok: [testbed-node-1] 2025-07-04 17:51:53.914370 | orchestrator | ok: [testbed-manager] 2025-07-04 17:51:53.915644 | orchestrator | ok: [testbed-node-4] 2025-07-04 17:51:53.919068 | orchestrator | 2025-07-04 17:51:53.919110 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2025-07-04 17:51:53.919121 | orchestrator | Friday 04 July 2025 17:51:53 +0000 (0:00:02.790) 0:05:29.199 *********** 2025-07-04 17:51:53.994217 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2025-07-04 17:51:53.995416 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2025-07-04 17:51:54.073553 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2025-07-04 17:51:54.074650 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2025-07-04 17:51:54.077395 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2025-07-04 17:51:54.156860 | orchestrator | skipping: [testbed-manager] 2025-07-04 17:51:54.157739 | orchestrator | skipping: 
[testbed-node-3] => (item=docker-engine)  2025-07-04 17:51:54.284299 | orchestrator | skipping: [testbed-node-3] 2025-07-04 17:51:54.284710 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2025-07-04 17:51:54.285984 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2025-07-04 17:51:54.286778 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2025-07-04 17:51:54.287243 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2025-07-04 17:51:54.289202 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2025-07-04 17:51:54.289231 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2025-07-04 17:51:54.526440 | orchestrator | skipping: [testbed-node-4] 2025-07-04 17:51:54.527206 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2025-07-04 17:51:54.528518 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2025-07-04 17:51:54.529371 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2025-07-04 17:51:54.601315 | orchestrator | skipping: [testbed-node-5] 2025-07-04 17:51:54.601528 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2025-07-04 17:51:54.602830 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2025-07-04 17:51:54.605830 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2025-07-04 17:51:54.760099 | orchestrator | skipping: [testbed-node-0] 2025-07-04 17:51:54.760432 | orchestrator | skipping: [testbed-node-1] 2025-07-04 17:51:54.762267 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2025-07-04 17:51:54.765377 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2025-07-04 17:51:54.765459 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2025-07-04 17:51:54.765473 | orchestrator | skipping: [testbed-node-2] 2025-07-04 17:51:54.765486 | orchestrator | 2025-07-04 17:51:54.765499 | orchestrator | TASK [osism.services.docker : Install 
apt-transport-https package] ************* 2025-07-04 17:51:54.765564 | orchestrator | Friday 04 July 2025 17:51:54 +0000 (0:00:00.847) 0:05:30.047 *********** 2025-07-04 17:52:01.198984 | orchestrator | ok: [testbed-manager] 2025-07-04 17:52:01.199131 | orchestrator | changed: [testbed-node-3] 2025-07-04 17:52:01.199149 | orchestrator | changed: [testbed-node-5] 2025-07-04 17:52:01.199678 | orchestrator | changed: [testbed-node-1] 2025-07-04 17:52:01.200326 | orchestrator | changed: [testbed-node-2] 2025-07-04 17:52:01.202274 | orchestrator | changed: [testbed-node-0] 2025-07-04 17:52:01.203275 | orchestrator | changed: [testbed-node-4] 2025-07-04 17:52:01.203926 | orchestrator | 2025-07-04 17:52:01.205144 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2025-07-04 17:52:01.206871 | orchestrator | Friday 04 July 2025 17:52:01 +0000 (0:00:06.436) 0:05:36.483 *********** 2025-07-04 17:52:02.268090 | orchestrator | ok: [testbed-manager] 2025-07-04 17:52:02.268871 | orchestrator | changed: [testbed-node-4] 2025-07-04 17:52:02.270218 | orchestrator | changed: [testbed-node-5] 2025-07-04 17:52:02.271978 | orchestrator | changed: [testbed-node-3] 2025-07-04 17:52:02.273164 | orchestrator | changed: [testbed-node-0] 2025-07-04 17:52:02.274192 | orchestrator | changed: [testbed-node-1] 2025-07-04 17:52:02.275149 | orchestrator | changed: [testbed-node-2] 2025-07-04 17:52:02.276340 | orchestrator | 2025-07-04 17:52:02.276874 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2025-07-04 17:52:02.277590 | orchestrator | Friday 04 July 2025 17:52:02 +0000 (0:00:01.069) 0:05:37.552 *********** 2025-07-04 17:52:10.117169 | orchestrator | ok: [testbed-manager] 2025-07-04 17:52:10.118252 | orchestrator | changed: [testbed-node-3] 2025-07-04 17:52:10.121874 | orchestrator | changed: [testbed-node-5] 2025-07-04 17:52:10.123466 | orchestrator | changed: [testbed-node-2] 2025-07-04 
17:52:10.124941 | orchestrator | changed: [testbed-node-1] 2025-07-04 17:52:10.126879 | orchestrator | changed: [testbed-node-0] 2025-07-04 17:52:10.127964 | orchestrator | changed: [testbed-node-4] 2025-07-04 17:52:10.128678 | orchestrator | 2025-07-04 17:52:10.130849 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2025-07-04 17:52:10.132492 | orchestrator | Friday 04 July 2025 17:52:10 +0000 (0:00:07.850) 0:05:45.403 *********** 2025-07-04 17:52:13.343911 | orchestrator | changed: [testbed-manager] 2025-07-04 17:52:13.344415 | orchestrator | changed: [testbed-node-3] 2025-07-04 17:52:13.346500 | orchestrator | changed: [testbed-node-5] 2025-07-04 17:52:13.347987 | orchestrator | changed: [testbed-node-1] 2025-07-04 17:52:13.349209 | orchestrator | changed: [testbed-node-4] 2025-07-04 17:52:13.350090 | orchestrator | changed: [testbed-node-2] 2025-07-04 17:52:13.351101 | orchestrator | changed: [testbed-node-0] 2025-07-04 17:52:13.352679 | orchestrator | 2025-07-04 17:52:13.353444 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2025-07-04 17:52:13.353673 | orchestrator | Friday 04 July 2025 17:52:13 +0000 (0:00:03.227) 0:05:48.630 *********** 2025-07-04 17:52:15.030749 | orchestrator | ok: [testbed-manager] 2025-07-04 17:52:15.030857 | orchestrator | changed: [testbed-node-3] 2025-07-04 17:52:15.032466 | orchestrator | changed: [testbed-node-4] 2025-07-04 17:52:15.033475 | orchestrator | changed: [testbed-node-5] 2025-07-04 17:52:15.034187 | orchestrator | changed: [testbed-node-0] 2025-07-04 17:52:15.035516 | orchestrator | changed: [testbed-node-1] 2025-07-04 17:52:15.037341 | orchestrator | changed: [testbed-node-2] 2025-07-04 17:52:15.037381 | orchestrator | 2025-07-04 17:52:15.037635 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2025-07-04 17:52:15.038694 | orchestrator | Friday 04 July 2025 17:52:15 +0000 
(0:00:01.684) 0:05:50.315 *********** 2025-07-04 17:52:16.341076 | orchestrator | ok: [testbed-manager] 2025-07-04 17:52:16.344806 | orchestrator | changed: [testbed-node-3] 2025-07-04 17:52:16.344860 | orchestrator | changed: [testbed-node-5] 2025-07-04 17:52:16.344872 | orchestrator | changed: [testbed-node-4] 2025-07-04 17:52:16.344883 | orchestrator | changed: [testbed-node-1] 2025-07-04 17:52:16.346254 | orchestrator | changed: [testbed-node-0] 2025-07-04 17:52:16.347080 | orchestrator | changed: [testbed-node-2] 2025-07-04 17:52:16.348245 | orchestrator | 2025-07-04 17:52:16.348700 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2025-07-04 17:52:16.349680 | orchestrator | Friday 04 July 2025 17:52:16 +0000 (0:00:01.309) 0:05:51.625 *********** 2025-07-04 17:52:16.568290 | orchestrator | skipping: [testbed-node-3] 2025-07-04 17:52:16.641964 | orchestrator | skipping: [testbed-node-4] 2025-07-04 17:52:16.709886 | orchestrator | skipping: [testbed-node-5] 2025-07-04 17:52:16.776615 | orchestrator | skipping: [testbed-node-0] 2025-07-04 17:52:17.005404 | orchestrator | skipping: [testbed-node-1] 2025-07-04 17:52:17.006706 | orchestrator | skipping: [testbed-node-2] 2025-07-04 17:52:17.007002 | orchestrator | changed: [testbed-manager] 2025-07-04 17:52:17.007529 | orchestrator | 2025-07-04 17:52:17.008213 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2025-07-04 17:52:17.008408 | orchestrator | Friday 04 July 2025 17:52:16 +0000 (0:00:00.668) 0:05:52.294 *********** 2025-07-04 17:52:26.803729 | orchestrator | ok: [testbed-manager] 2025-07-04 17:52:26.804533 | orchestrator | changed: [testbed-node-5] 2025-07-04 17:52:26.805841 | orchestrator | changed: [testbed-node-3] 2025-07-04 17:52:26.807513 | orchestrator | changed: [testbed-node-1] 2025-07-04 17:52:26.809684 | orchestrator | changed: [testbed-node-2] 2025-07-04 17:52:26.810545 | orchestrator | changed: 
[testbed-node-0] 2025-07-04 17:52:26.811001 | orchestrator | changed: [testbed-node-4] 2025-07-04 17:52:26.812016 | orchestrator | 2025-07-04 17:52:26.812433 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2025-07-04 17:52:26.813177 | orchestrator | Friday 04 July 2025 17:52:26 +0000 (0:00:09.794) 0:06:02.088 *********** 2025-07-04 17:52:27.686675 | orchestrator | changed: [testbed-manager] 2025-07-04 17:52:27.686792 | orchestrator | changed: [testbed-node-3] 2025-07-04 17:52:27.686809 | orchestrator | changed: [testbed-node-4] 2025-07-04 17:52:27.687494 | orchestrator | changed: [testbed-node-5] 2025-07-04 17:52:27.688585 | orchestrator | changed: [testbed-node-0] 2025-07-04 17:52:27.689496 | orchestrator | changed: [testbed-node-1] 2025-07-04 17:52:27.690284 | orchestrator | changed: [testbed-node-2] 2025-07-04 17:52:27.691015 | orchestrator | 2025-07-04 17:52:27.691620 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2025-07-04 17:52:27.692091 | orchestrator | Friday 04 July 2025 17:52:27 +0000 (0:00:00.883) 0:06:02.972 *********** 2025-07-04 17:52:37.282975 | orchestrator | ok: [testbed-manager] 2025-07-04 17:52:37.283101 | orchestrator | changed: [testbed-node-3] 2025-07-04 17:52:37.283867 | orchestrator | changed: [testbed-node-5] 2025-07-04 17:52:37.286252 | orchestrator | changed: [testbed-node-1] 2025-07-04 17:52:37.286286 | orchestrator | changed: [testbed-node-4] 2025-07-04 17:52:37.288822 | orchestrator | changed: [testbed-node-0] 2025-07-04 17:52:37.290446 | orchestrator | changed: [testbed-node-2] 2025-07-04 17:52:37.291534 | orchestrator | 2025-07-04 17:52:37.292456 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2025-07-04 17:52:37.293588 | orchestrator | Friday 04 July 2025 17:52:37 +0000 (0:00:09.597) 0:06:12.569 *********** 2025-07-04 17:52:48.629321 | orchestrator | ok: [testbed-manager] 2025-07-04 
17:52:48.629440 | orchestrator | changed: [testbed-node-3] 2025-07-04 17:52:48.629458 | orchestrator | changed: [testbed-node-5] 2025-07-04 17:52:48.629470 | orchestrator | changed: [testbed-node-1] 2025-07-04 17:52:48.632187 | orchestrator | changed: [testbed-node-2] 2025-07-04 17:52:48.633057 | orchestrator | changed: [testbed-node-0] 2025-07-04 17:52:48.633995 | orchestrator | changed: [testbed-node-4] 2025-07-04 17:52:48.635873 | orchestrator | 2025-07-04 17:52:48.636554 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2025-07-04 17:52:48.637983 | orchestrator | Friday 04 July 2025 17:52:48 +0000 (0:00:11.342) 0:06:23.912 *********** 2025-07-04 17:52:49.019469 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2025-07-04 17:52:49.874585 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2025-07-04 17:52:49.874685 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2025-07-04 17:52:49.877517 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2025-07-04 17:52:49.877561 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2025-07-04 17:52:49.877575 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2025-07-04 17:52:49.879867 | orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2025-07-04 17:52:49.880749 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2025-07-04 17:52:49.881710 | orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2025-07-04 17:52:49.882544 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2025-07-04 17:52:49.882844 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2025-07-04 17:52:49.883590 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2025-07-04 17:52:49.884218 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2025-07-04 17:52:49.884655 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2025-07-04 17:52:49.885366 | orchestrator | 
2025-07-04 17:52:49.885840 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ****************** 2025-07-04 17:52:49.886584 | orchestrator | Friday 04 July 2025 17:52:49 +0000 (0:00:01.244) 0:06:25.157 *********** 2025-07-04 17:52:50.011005 | orchestrator | skipping: [testbed-manager] 2025-07-04 17:52:50.079370 | orchestrator | skipping: [testbed-node-3] 2025-07-04 17:52:50.151337 | orchestrator | skipping: [testbed-node-4] 2025-07-04 17:52:50.217036 | orchestrator | skipping: [testbed-node-5] 2025-07-04 17:52:50.296196 | orchestrator | skipping: [testbed-node-0] 2025-07-04 17:52:50.441532 | orchestrator | skipping: [testbed-node-1] 2025-07-04 17:52:50.442844 | orchestrator | skipping: [testbed-node-2] 2025-07-04 17:52:50.443744 | orchestrator | 2025-07-04 17:52:50.444738 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2025-07-04 17:52:50.445724 | orchestrator | Friday 04 July 2025 17:52:50 +0000 (0:00:00.573) 0:06:25.730 *********** 2025-07-04 17:52:54.373504 | orchestrator | ok: [testbed-manager] 2025-07-04 17:52:54.373695 | orchestrator | changed: [testbed-node-3] 2025-07-04 17:52:54.375861 | orchestrator | changed: [testbed-node-5] 2025-07-04 17:52:54.376152 | orchestrator | changed: [testbed-node-1] 2025-07-04 17:52:54.378309 | orchestrator | changed: [testbed-node-0] 2025-07-04 17:52:54.379109 | orchestrator | changed: [testbed-node-4] 2025-07-04 17:52:54.379856 | orchestrator | changed: [testbed-node-2] 2025-07-04 17:52:54.380593 | orchestrator | 2025-07-04 17:52:54.381281 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2025-07-04 17:52:54.382295 | orchestrator | Friday 04 July 2025 17:52:54 +0000 (0:00:03.928) 0:06:29.658 *********** 2025-07-04 17:52:54.520116 | orchestrator | skipping: [testbed-manager] 2025-07-04 17:52:54.585666 | orchestrator | skipping: [testbed-node-3] 2025-07-04 17:52:54.666670 | 
orchestrator | skipping: [testbed-node-4] 2025-07-04 17:52:54.744833 | orchestrator | skipping: [testbed-node-5] 2025-07-04 17:52:54.806517 | orchestrator | skipping: [testbed-node-0] 2025-07-04 17:52:54.914776 | orchestrator | skipping: [testbed-node-1] 2025-07-04 17:52:54.915403 | orchestrator | skipping: [testbed-node-2] 2025-07-04 17:52:54.916615 | orchestrator | 2025-07-04 17:52:54.916821 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] *** 2025-07-04 17:52:54.917348 | orchestrator | Friday 04 July 2025 17:52:54 +0000 (0:00:00.543) 0:06:30.202 *********** 2025-07-04 17:52:54.992673 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)  2025-07-04 17:52:54.993238 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2025-07-04 17:52:55.064301 | orchestrator | skipping: [testbed-manager] 2025-07-04 17:52:55.065206 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2025-07-04 17:52:55.066265 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2025-07-04 17:52:55.151294 | orchestrator | skipping: [testbed-node-3] 2025-07-04 17:52:55.152243 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2025-07-04 17:52:55.153257 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)  2025-07-04 17:52:55.261136 | orchestrator | skipping: [testbed-node-4] 2025-07-04 17:52:55.261362 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2025-07-04 17:52:55.262670 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2025-07-04 17:52:55.335013 | orchestrator | skipping: [testbed-node-5] 2025-07-04 17:52:55.335232 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2025-07-04 17:52:55.335255 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2025-07-04 17:52:55.406840 | orchestrator | skipping: [testbed-node-0] 2025-07-04 17:52:55.408362 | 
orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2025-07-04 17:52:55.408942 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2025-07-04 17:52:55.530892 | orchestrator | skipping: [testbed-node-1] 2025-07-04 17:52:55.532840 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2025-07-04 17:52:55.533952 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2025-07-04 17:52:55.535787 | orchestrator | skipping: [testbed-node-2] 2025-07-04 17:52:55.536507 | orchestrator | 2025-07-04 17:52:55.537127 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] *** 2025-07-04 17:52:55.538254 | orchestrator | Friday 04 July 2025 17:52:55 +0000 (0:00:00.617) 0:06:30.819 *********** 2025-07-04 17:52:55.668065 | orchestrator | skipping: [testbed-manager] 2025-07-04 17:52:55.740723 | orchestrator | skipping: [testbed-node-3] 2025-07-04 17:52:55.806254 | orchestrator | skipping: [testbed-node-4] 2025-07-04 17:52:55.872563 | orchestrator | skipping: [testbed-node-5] 2025-07-04 17:52:55.943527 | orchestrator | skipping: [testbed-node-0] 2025-07-04 17:52:56.043865 | orchestrator | skipping: [testbed-node-1] 2025-07-04 17:52:56.044957 | orchestrator | skipping: [testbed-node-2] 2025-07-04 17:52:56.046280 | orchestrator | 2025-07-04 17:52:56.047405 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2025-07-04 17:52:56.048148 | orchestrator | Friday 04 July 2025 17:52:56 +0000 (0:00:00.510) 0:06:31.330 *********** 2025-07-04 17:52:56.177046 | orchestrator | skipping: [testbed-manager] 2025-07-04 17:52:56.242861 | orchestrator | skipping: [testbed-node-3] 2025-07-04 17:52:56.308640 | orchestrator | skipping: [testbed-node-4] 2025-07-04 17:52:56.380994 | orchestrator | skipping: [testbed-node-5] 2025-07-04 17:52:56.446655 | orchestrator | skipping: [testbed-node-0] 2025-07-04 17:52:56.546370 | orchestrator | 
skipping: [testbed-node-1] 2025-07-04 17:52:56.546873 | orchestrator | skipping: [testbed-node-2] 2025-07-04 17:52:56.548015 | orchestrator | 2025-07-04 17:52:56.549392 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] ******* 2025-07-04 17:52:56.550744 | orchestrator | Friday 04 July 2025 17:52:56 +0000 (0:00:00.502) 0:06:31.832 *********** 2025-07-04 17:52:56.684119 | orchestrator | skipping: [testbed-manager] 2025-07-04 17:52:56.753586 | orchestrator | skipping: [testbed-node-3] 2025-07-04 17:52:56.823726 | orchestrator | skipping: [testbed-node-4] 2025-07-04 17:52:57.059234 | orchestrator | skipping: [testbed-node-5] 2025-07-04 17:52:57.127198 | orchestrator | skipping: [testbed-node-0] 2025-07-04 17:52:57.253685 | orchestrator | skipping: [testbed-node-1] 2025-07-04 17:52:57.258009 | orchestrator | skipping: [testbed-node-2] 2025-07-04 17:52:57.258117 | orchestrator | 2025-07-04 17:52:57.258142 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2025-07-04 17:52:57.258221 | orchestrator | Friday 04 July 2025 17:52:57 +0000 (0:00:00.707) 0:06:32.539 *********** 2025-07-04 17:52:58.961641 | orchestrator | ok: [testbed-manager] 2025-07-04 17:52:58.961857 | orchestrator | ok: [testbed-node-3] 2025-07-04 17:52:58.962805 | orchestrator | ok: [testbed-node-5] 2025-07-04 17:52:58.964195 | orchestrator | ok: [testbed-node-4] 2025-07-04 17:52:58.965950 | orchestrator | ok: [testbed-node-0] 2025-07-04 17:52:58.967162 | orchestrator | ok: [testbed-node-1] 2025-07-04 17:52:58.967562 | orchestrator | ok: [testbed-node-2] 2025-07-04 17:52:58.968815 | orchestrator | 2025-07-04 17:52:58.968994 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2025-07-04 17:52:58.969898 | orchestrator | Friday 04 July 2025 17:52:58 +0000 (0:00:01.709) 0:06:34.248 *********** 2025-07-04 17:52:59.854406 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-07-04 17:52:59.858777 | orchestrator | 2025-07-04 17:52:59.858828 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2025-07-04 17:52:59.859338 | orchestrator | Friday 04 July 2025 17:52:59 +0000 (0:00:00.891) 0:06:35.140 *********** 2025-07-04 17:53:00.274370 | orchestrator | ok: [testbed-manager] 2025-07-04 17:53:00.720081 | orchestrator | changed: [testbed-node-3] 2025-07-04 17:53:00.720709 | orchestrator | changed: [testbed-node-4] 2025-07-04 17:53:00.722620 | orchestrator | changed: [testbed-node-5] 2025-07-04 17:53:00.722719 | orchestrator | changed: [testbed-node-0] 2025-07-04 17:53:00.723476 | orchestrator | changed: [testbed-node-1] 2025-07-04 17:53:00.724008 | orchestrator | changed: [testbed-node-2] 2025-07-04 17:53:00.724941 | orchestrator | 2025-07-04 17:53:00.725543 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2025-07-04 17:53:00.729504 | orchestrator | Friday 04 July 2025 17:53:00 +0000 (0:00:00.866) 0:06:36.007 *********** 2025-07-04 17:53:01.226587 | orchestrator | changed: [testbed-node-3] 2025-07-04 17:53:01.303817 | orchestrator | ok: [testbed-manager] 2025-07-04 17:53:01.807208 | orchestrator | changed: [testbed-node-4] 2025-07-04 17:53:01.811604 | orchestrator | changed: [testbed-node-5] 2025-07-04 17:53:01.814362 | orchestrator | changed: [testbed-node-1] 2025-07-04 17:53:01.815469 | orchestrator | changed: [testbed-node-0] 2025-07-04 17:53:01.816551 | orchestrator | changed: [testbed-node-2] 2025-07-04 17:53:01.817576 | orchestrator | 2025-07-04 17:53:01.818826 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2025-07-04 17:53:01.819076 | orchestrator | Friday 04 July 2025 17:53:01 
+0000 (0:00:01.086) 0:06:37.094 *********** 2025-07-04 17:53:03.156707 | orchestrator | ok: [testbed-manager] 2025-07-04 17:53:03.158782 | orchestrator | changed: [testbed-node-3] 2025-07-04 17:53:03.160864 | orchestrator | changed: [testbed-node-4] 2025-07-04 17:53:03.161778 | orchestrator | changed: [testbed-node-5] 2025-07-04 17:53:03.163974 | orchestrator | changed: [testbed-node-0] 2025-07-04 17:53:03.165992 | orchestrator | changed: [testbed-node-1] 2025-07-04 17:53:03.167035 | orchestrator | changed: [testbed-node-2] 2025-07-04 17:53:03.168432 | orchestrator | 2025-07-04 17:53:03.169164 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] *** 2025-07-04 17:53:03.170428 | orchestrator | Friday 04 July 2025 17:53:03 +0000 (0:00:01.347) 0:06:38.442 *********** 2025-07-04 17:53:03.285330 | orchestrator | skipping: [testbed-manager] 2025-07-04 17:53:04.507432 | orchestrator | ok: [testbed-node-3] 2025-07-04 17:53:04.507539 | orchestrator | ok: [testbed-node-5] 2025-07-04 17:53:04.508304 | orchestrator | ok: [testbed-node-4] 2025-07-04 17:53:04.512253 | orchestrator | ok: [testbed-node-0] 2025-07-04 17:53:04.513000 | orchestrator | ok: [testbed-node-1] 2025-07-04 17:53:04.513796 | orchestrator | ok: [testbed-node-2] 2025-07-04 17:53:04.515172 | orchestrator | 2025-07-04 17:53:04.516098 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2025-07-04 17:53:04.517079 | orchestrator | Friday 04 July 2025 17:53:04 +0000 (0:00:01.349) 0:06:39.791 *********** 2025-07-04 17:53:05.848778 | orchestrator | ok: [testbed-manager] 2025-07-04 17:53:05.848888 | orchestrator | changed: [testbed-node-3] 2025-07-04 17:53:05.849507 | orchestrator | changed: [testbed-node-5] 2025-07-04 17:53:05.850193 | orchestrator | changed: [testbed-node-4] 2025-07-04 17:53:05.850445 | orchestrator | changed: [testbed-node-0] 2025-07-04 17:53:05.852571 | orchestrator | changed: [testbed-node-1] 
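The systemd overlay tasks above ("Create systemd overlay directory", "Copy systemd overlay file", "Reload systemd daemon if systemd overlay file is changed") follow the standard drop-in pattern: a partial unit file under /etc/systemd/system/docker.service.d/ overrides selected directives of the packaged docker.service, and systemd must re-read its configuration before the override takes effect. A minimal sketch, assuming the overlay adjusts resource limits (the directives the role actually ships are not shown in this log):

```
# Hypothetical /etc/systemd/system/docker.service.d/overlay.conf — illustrative only
[Service]
LimitNOFILE=1048576
LimitNPROC=infinity
```

After copying such a file, `systemctl daemon-reload` has to run once, which is exactly what the conditional "Reload systemd daemon" task does when the copy reports changed.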
2025-07-04 17:53:05.853202 | orchestrator | changed: [testbed-node-2] 2025-07-04 17:53:05.853394 | orchestrator | 2025-07-04 17:53:05.853983 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2025-07-04 17:53:05.854413 | orchestrator | Friday 04 July 2025 17:53:05 +0000 (0:00:01.341) 0:06:41.133 *********** 2025-07-04 17:53:07.360970 | orchestrator | changed: [testbed-manager] 2025-07-04 17:53:07.361086 | orchestrator | changed: [testbed-node-3] 2025-07-04 17:53:07.361467 | orchestrator | changed: [testbed-node-5] 2025-07-04 17:53:07.361596 | orchestrator | changed: [testbed-node-4] 2025-07-04 17:53:07.362132 | orchestrator | changed: [testbed-node-1] 2025-07-04 17:53:07.365472 | orchestrator | changed: [testbed-node-0] 2025-07-04 17:53:07.365533 | orchestrator | changed: [testbed-node-2] 2025-07-04 17:53:07.365545 | orchestrator | 2025-07-04 17:53:07.365557 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2025-07-04 17:53:07.365569 | orchestrator | Friday 04 July 2025 17:53:07 +0000 (0:00:01.515) 0:06:42.648 *********** 2025-07-04 17:53:08.457602 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-07-04 17:53:08.457778 | orchestrator | 2025-07-04 17:53:08.458426 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2025-07-04 17:53:08.458535 | orchestrator | Friday 04 July 2025 17:53:08 +0000 (0:00:01.096) 0:06:43.745 *********** 2025-07-04 17:53:09.841044 | orchestrator | ok: [testbed-manager] 2025-07-04 17:53:09.841433 | orchestrator | ok: [testbed-node-3] 2025-07-04 17:53:09.842253 | orchestrator | ok: [testbed-node-4] 2025-07-04 17:53:09.843615 | orchestrator | ok: [testbed-node-5] 2025-07-04 17:53:09.844550 | orchestrator | ok: 
[testbed-node-0] 2025-07-04 17:53:09.845788 | orchestrator | ok: [testbed-node-1] 2025-07-04 17:53:09.846631 | orchestrator | ok: [testbed-node-2] 2025-07-04 17:53:09.847675 | orchestrator | 2025-07-04 17:53:09.849183 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2025-07-04 17:53:09.849263 | orchestrator | Friday 04 July 2025 17:53:09 +0000 (0:00:01.383) 0:06:45.128 *********** 2025-07-04 17:53:10.991560 | orchestrator | ok: [testbed-manager] 2025-07-04 17:53:10.991679 | orchestrator | ok: [testbed-node-3] 2025-07-04 17:53:10.991702 | orchestrator | ok: [testbed-node-4] 2025-07-04 17:53:10.992269 | orchestrator | ok: [testbed-node-5] 2025-07-04 17:53:10.993380 | orchestrator | ok: [testbed-node-0] 2025-07-04 17:53:10.994354 | orchestrator | ok: [testbed-node-1] 2025-07-04 17:53:10.995539 | orchestrator | ok: [testbed-node-2] 2025-07-04 17:53:10.996481 | orchestrator | 2025-07-04 17:53:10.997289 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2025-07-04 17:53:10.998105 | orchestrator | Friday 04 July 2025 17:53:10 +0000 (0:00:01.146) 0:06:46.275 *********** 2025-07-04 17:53:12.342874 | orchestrator | ok: [testbed-manager] 2025-07-04 17:53:12.343063 | orchestrator | ok: [testbed-node-3] 2025-07-04 17:53:12.343953 | orchestrator | ok: [testbed-node-4] 2025-07-04 17:53:12.344472 | orchestrator | ok: [testbed-node-5] 2025-07-04 17:53:12.345011 | orchestrator | ok: [testbed-node-0] 2025-07-04 17:53:12.345530 | orchestrator | ok: [testbed-node-1] 2025-07-04 17:53:12.347532 | orchestrator | ok: [testbed-node-2] 2025-07-04 17:53:12.347976 | orchestrator | 2025-07-04 17:53:12.348719 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2025-07-04 17:53:12.348852 | orchestrator | Friday 04 July 2025 17:53:12 +0000 (0:00:01.350) 0:06:47.626 *********** 2025-07-04 17:53:13.466001 | orchestrator | ok: [testbed-manager] 2025-07-04 
17:53:13.466159 | orchestrator | ok: [testbed-node-3] 2025-07-04 17:53:13.466174 | orchestrator | ok: [testbed-node-4] 2025-07-04 17:53:13.466606 | orchestrator | ok: [testbed-node-5] 2025-07-04 17:53:13.467650 | orchestrator | ok: [testbed-node-0] 2025-07-04 17:53:13.468449 | orchestrator | ok: [testbed-node-1] 2025-07-04 17:53:13.469878 | orchestrator | ok: [testbed-node-2] 2025-07-04 17:53:13.470713 | orchestrator | 2025-07-04 17:53:13.471448 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2025-07-04 17:53:13.471685 | orchestrator | Friday 04 July 2025 17:53:13 +0000 (0:00:01.119) 0:06:48.746 *********** 2025-07-04 17:53:14.639065 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-07-04 17:53:14.641943 | orchestrator | 2025-07-04 17:53:14.644708 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-07-04 17:53:14.646502 | orchestrator | Friday 04 July 2025 17:53:14 +0000 (0:00:00.889) 0:06:49.635 *********** 2025-07-04 17:53:14.647427 | orchestrator | 2025-07-04 17:53:14.648277 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-07-04 17:53:14.649466 | orchestrator | Friday 04 July 2025 17:53:14 +0000 (0:00:00.039) 0:06:49.675 *********** 2025-07-04 17:53:14.650488 | orchestrator | 2025-07-04 17:53:14.651305 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-07-04 17:53:14.652129 | orchestrator | Friday 04 July 2025 17:53:14 +0000 (0:00:00.046) 0:06:49.721 *********** 2025-07-04 17:53:14.653020 | orchestrator | 2025-07-04 17:53:14.654090 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-07-04 17:53:14.654988 | 
orchestrator | Friday 04 July 2025 17:53:14 +0000 (0:00:00.039) 0:06:49.761 *********** 2025-07-04 17:53:14.655660 | orchestrator | 2025-07-04 17:53:14.656127 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-07-04 17:53:14.657504 | orchestrator | Friday 04 July 2025 17:53:14 +0000 (0:00:00.038) 0:06:49.799 *********** 2025-07-04 17:53:14.658548 | orchestrator | 2025-07-04 17:53:14.658746 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-07-04 17:53:14.661447 | orchestrator | Friday 04 July 2025 17:53:14 +0000 (0:00:00.045) 0:06:49.844 *********** 2025-07-04 17:53:14.661488 | orchestrator | 2025-07-04 17:53:14.661508 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-07-04 17:53:14.661527 | orchestrator | Friday 04 July 2025 17:53:14 +0000 (0:00:00.039) 0:06:49.884 *********** 2025-07-04 17:53:14.662122 | orchestrator | 2025-07-04 17:53:14.663107 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-07-04 17:53:14.663779 | orchestrator | Friday 04 July 2025 17:53:14 +0000 (0:00:00.039) 0:06:49.923 *********** 2025-07-04 17:53:15.999617 | orchestrator | ok: [testbed-node-1] 2025-07-04 17:53:15.999784 | orchestrator | ok: [testbed-node-0] 2025-07-04 17:53:16.000604 | orchestrator | ok: [testbed-node-2] 2025-07-04 17:53:16.001749 | orchestrator | 2025-07-04 17:53:16.002141 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2025-07-04 17:53:16.003326 | orchestrator | Friday 04 July 2025 17:53:15 +0000 (0:00:01.361) 0:06:51.285 *********** 2025-07-04 17:53:17.350999 | orchestrator | changed: [testbed-manager] 2025-07-04 17:53:17.351601 | orchestrator | changed: [testbed-node-5] 2025-07-04 17:53:17.354006 | orchestrator | changed: [testbed-node-3] 2025-07-04 17:53:17.355013 | orchestrator | changed: [testbed-node-0] 
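The run of "Flush handlers" meta tasks followed by the "RUNNING HANDLER" entries above is Ansible's deferred-notification mechanism: configuration tasks notify a handler, and `meta: flush_handlers` forces the queued restarts to run at a defined point instead of at the end of the play. A schematic fragment showing the pattern, not the role's actual source:

```yaml
# Schematic play illustrating notify + flush_handlers — not the osism role itself
- hosts: docker_hosts
  tasks:
    - name: Copy daemon.json configuration file
      ansible.builtin.template:
        src: daemon.json.j2        # hypothetical template name
        dest: /etc/docker/daemon.json
      notify: Restart docker service

    # Run any queued handlers now rather than at the end of the play
    - name: Flush handlers
      ansible.builtin.meta: flush_handlers

  handlers:
    - name: Restart docker service
      ansible.builtin.service:
        name: docker
        state: restarted
```

This also explains why "Restart docker service" is skipped on testbed-manager in the log: a handler only fires on hosts where a notifying task actually reported changed.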
2025-07-04 17:53:17.355514 | orchestrator | changed: [testbed-node-1] 2025-07-04 17:53:17.356201 | orchestrator | changed: [testbed-node-4] 2025-07-04 17:53:17.357063 | orchestrator | changed: [testbed-node-2] 2025-07-04 17:53:17.357711 | orchestrator | 2025-07-04 17:53:17.358404 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2025-07-04 17:53:17.359130 | orchestrator | Friday 04 July 2025 17:53:17 +0000 (0:00:01.350) 0:06:52.635 *********** 2025-07-04 17:53:18.550205 | orchestrator | changed: [testbed-manager] 2025-07-04 17:53:18.551218 | orchestrator | changed: [testbed-node-3] 2025-07-04 17:53:18.552648 | orchestrator | changed: [testbed-node-4] 2025-07-04 17:53:18.554310 | orchestrator | changed: [testbed-node-5] 2025-07-04 17:53:18.555332 | orchestrator | changed: [testbed-node-0] 2025-07-04 17:53:18.556332 | orchestrator | changed: [testbed-node-1] 2025-07-04 17:53:18.557464 | orchestrator | changed: [testbed-node-2] 2025-07-04 17:53:18.558944 | orchestrator | 2025-07-04 17:53:18.559691 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2025-07-04 17:53:18.560834 | orchestrator | Friday 04 July 2025 17:53:18 +0000 (0:00:01.199) 0:06:53.834 *********** 2025-07-04 17:53:18.694197 | orchestrator | skipping: [testbed-manager] 2025-07-04 17:53:20.838779 | orchestrator | changed: [testbed-node-3] 2025-07-04 17:53:20.843557 | orchestrator | changed: [testbed-node-0] 2025-07-04 17:53:20.844201 | orchestrator | changed: [testbed-node-1] 2025-07-04 17:53:20.845607 | orchestrator | changed: [testbed-node-5] 2025-07-04 17:53:20.846629 | orchestrator | changed: [testbed-node-4] 2025-07-04 17:53:20.847719 | orchestrator | changed: [testbed-node-2] 2025-07-04 17:53:20.848520 | orchestrator | 2025-07-04 17:53:20.850197 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2025-07-04 17:53:20.850224 | orchestrator | Friday 04 July 2025 
17:53:20 +0000 (0:00:02.286) 0:06:56.121 *********** 2025-07-04 17:53:20.939732 | orchestrator | skipping: [testbed-node-3] 2025-07-04 17:53:20.941250 | orchestrator | 2025-07-04 17:53:20.941657 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2025-07-04 17:53:20.943258 | orchestrator | Friday 04 July 2025 17:53:20 +0000 (0:00:00.105) 0:06:56.227 *********** 2025-07-04 17:53:21.958742 | orchestrator | ok: [testbed-manager] 2025-07-04 17:53:21.958874 | orchestrator | changed: [testbed-node-3] 2025-07-04 17:53:21.961199 | orchestrator | changed: [testbed-node-5] 2025-07-04 17:53:21.962643 | orchestrator | changed: [testbed-node-4] 2025-07-04 17:53:21.963489 | orchestrator | changed: [testbed-node-0] 2025-07-04 17:53:21.964497 | orchestrator | changed: [testbed-node-1] 2025-07-04 17:53:21.965434 | orchestrator | changed: [testbed-node-2] 2025-07-04 17:53:21.966437 | orchestrator | 2025-07-04 17:53:21.967038 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2025-07-04 17:53:21.967832 | orchestrator | Friday 04 July 2025 17:53:21 +0000 (0:00:01.015) 0:06:57.243 *********** 2025-07-04 17:53:22.305815 | orchestrator | skipping: [testbed-manager] 2025-07-04 17:53:22.376340 | orchestrator | skipping: [testbed-node-3] 2025-07-04 17:53:22.451434 | orchestrator | skipping: [testbed-node-4] 2025-07-04 17:53:22.527553 | orchestrator | skipping: [testbed-node-5] 2025-07-04 17:53:22.608296 | orchestrator | skipping: [testbed-node-0] 2025-07-04 17:53:22.737644 | orchestrator | skipping: [testbed-node-1] 2025-07-04 17:53:22.738713 | orchestrator | skipping: [testbed-node-2] 2025-07-04 17:53:22.738802 | orchestrator | 2025-07-04 17:53:22.739423 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2025-07-04 17:53:22.739502 | orchestrator | Friday 04 July 2025 17:53:22 +0000 (0:00:00.781) 0:06:58.024 *********** 2025-07-04 17:53:23.640733 
| orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-07-04 17:53:23.641053 | orchestrator | 2025-07-04 17:53:23.642701 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2025-07-04 17:53:23.643430 | orchestrator | Friday 04 July 2025 17:53:23 +0000 (0:00:00.902) 0:06:58.927 *********** 2025-07-04 17:53:24.063594 | orchestrator | ok: [testbed-manager] 2025-07-04 17:53:24.484300 | orchestrator | ok: [testbed-node-3] 2025-07-04 17:53:24.486140 | orchestrator | ok: [testbed-node-4] 2025-07-04 17:53:24.486522 | orchestrator | ok: [testbed-node-5] 2025-07-04 17:53:24.487821 | orchestrator | ok: [testbed-node-0] 2025-07-04 17:53:24.489410 | orchestrator | ok: [testbed-node-1] 2025-07-04 17:53:24.489754 | orchestrator | ok: [testbed-node-2] 2025-07-04 17:53:24.492705 | orchestrator | 2025-07-04 17:53:24.494005 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2025-07-04 17:53:24.495690 | orchestrator | Friday 04 July 2025 17:53:24 +0000 (0:00:00.845) 0:06:59.773 *********** 2025-07-04 17:53:27.165961 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2025-07-04 17:53:27.167269 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2025-07-04 17:53:27.171448 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2025-07-04 17:53:27.171523 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2025-07-04 17:53:27.171536 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2025-07-04 17:53:27.172335 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2025-07-04 17:53:27.173087 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2025-07-04 17:53:27.174110 | orchestrator | changed: 
[testbed-node-2] => (item=docker_containers) 2025-07-04 17:53:27.174456 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2025-07-04 17:53:27.175498 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2025-07-04 17:53:27.175879 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2025-07-04 17:53:27.176991 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2025-07-04 17:53:27.178254 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2025-07-04 17:53:27.179607 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2025-07-04 17:53:27.180414 | orchestrator | 2025-07-04 17:53:27.180692 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] ******* 2025-07-04 17:53:27.181351 | orchestrator | Friday 04 July 2025 17:53:27 +0000 (0:00:02.677) 0:07:02.450 *********** 2025-07-04 17:53:27.313788 | orchestrator | skipping: [testbed-manager] 2025-07-04 17:53:27.380672 | orchestrator | skipping: [testbed-node-3] 2025-07-04 17:53:27.453112 | orchestrator | skipping: [testbed-node-4] 2025-07-04 17:53:27.516131 | orchestrator | skipping: [testbed-node-5] 2025-07-04 17:53:27.579893 | orchestrator | skipping: [testbed-node-0] 2025-07-04 17:53:27.681507 | orchestrator | skipping: [testbed-node-1] 2025-07-04 17:53:27.682607 | orchestrator | skipping: [testbed-node-2] 2025-07-04 17:53:27.683883 | orchestrator | 2025-07-04 17:53:27.689563 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2025-07-04 17:53:27.691228 | orchestrator | Friday 04 July 2025 17:53:27 +0000 (0:00:00.520) 0:07:02.970 *********** 2025-07-04 17:53:28.503695 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-07-04 17:53:28.505001 
| orchestrator | 2025-07-04 17:53:28.509143 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2025-07-04 17:53:28.509169 | orchestrator | Friday 04 July 2025 17:53:28 +0000 (0:00:00.819) 0:07:03.789 *********** 2025-07-04 17:53:29.116502 | orchestrator | ok: [testbed-manager] 2025-07-04 17:53:29.177297 | orchestrator | ok: [testbed-node-3] 2025-07-04 17:53:29.624594 | orchestrator | ok: [testbed-node-4] 2025-07-04 17:53:29.625042 | orchestrator | ok: [testbed-node-5] 2025-07-04 17:53:29.625590 | orchestrator | ok: [testbed-node-0] 2025-07-04 17:53:29.629062 | orchestrator | ok: [testbed-node-1] 2025-07-04 17:53:29.629619 | orchestrator | ok: [testbed-node-2] 2025-07-04 17:53:29.630406 | orchestrator | 2025-07-04 17:53:29.631166 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2025-07-04 17:53:29.631608 | orchestrator | Friday 04 July 2025 17:53:29 +0000 (0:00:01.121) 0:07:04.911 *********** 2025-07-04 17:53:30.027979 | orchestrator | ok: [testbed-manager] 2025-07-04 17:53:30.104878 | orchestrator | ok: [testbed-node-3] 2025-07-04 17:53:30.539334 | orchestrator | ok: [testbed-node-4] 2025-07-04 17:53:30.540803 | orchestrator | ok: [testbed-node-5] 2025-07-04 17:53:30.543974 | orchestrator | ok: [testbed-node-0] 2025-07-04 17:53:30.545023 | orchestrator | ok: [testbed-node-1] 2025-07-04 17:53:30.546261 | orchestrator | ok: [testbed-node-2] 2025-07-04 17:53:30.547625 | orchestrator | 2025-07-04 17:53:30.548569 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2025-07-04 17:53:30.549395 | orchestrator | Friday 04 July 2025 17:53:30 +0000 (0:00:00.913) 0:07:05.825 *********** 2025-07-04 17:53:30.691737 | orchestrator | skipping: [testbed-manager] 2025-07-04 17:53:30.770700 | orchestrator | skipping: [testbed-node-3] 2025-07-04 17:53:30.836588 | orchestrator | skipping: [testbed-node-4] 2025-07-04 17:53:30.908806 | 
orchestrator | skipping: [testbed-node-5]
2025-07-04 17:53:30.974899 | orchestrator | skipping: [testbed-node-0]
2025-07-04 17:53:31.068540 | orchestrator | skipping: [testbed-node-1]
2025-07-04 17:53:31.070608 | orchestrator | skipping: [testbed-node-2]
2025-07-04 17:53:31.072646 | orchestrator |
2025-07-04 17:53:31.074104 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] *********
2025-07-04 17:53:31.075470 | orchestrator | Friday 04 July 2025 17:53:31 +0000 (0:00:00.529) 0:07:06.355 ***********
2025-07-04 17:53:32.527215 | orchestrator | ok: [testbed-manager]
2025-07-04 17:53:32.527319 | orchestrator | ok: [testbed-node-3]
2025-07-04 17:53:32.528406 | orchestrator | ok: [testbed-node-5]
2025-07-04 17:53:32.528713 | orchestrator | ok: [testbed-node-4]
2025-07-04 17:53:32.531353 | orchestrator | ok: [testbed-node-0]
2025-07-04 17:53:32.532207 | orchestrator | ok: [testbed-node-1]
2025-07-04 17:53:32.533018 | orchestrator | ok: [testbed-node-2]
2025-07-04 17:53:32.534242 | orchestrator |
2025-07-04 17:53:32.534847 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2025-07-04 17:53:32.535651 | orchestrator | Friday 04 July 2025 17:53:32 +0000 (0:00:01.458) 0:07:07.813 ***********
2025-07-04 17:53:32.670554 | orchestrator | skipping: [testbed-manager]
2025-07-04 17:53:32.741347 | orchestrator | skipping: [testbed-node-3]
2025-07-04 17:53:32.817503 | orchestrator | skipping: [testbed-node-4]
2025-07-04 17:53:32.883988 | orchestrator | skipping: [testbed-node-5]
2025-07-04 17:53:32.958209 | orchestrator | skipping: [testbed-node-0]
2025-07-04 17:53:33.047552 | orchestrator | skipping: [testbed-node-1]
2025-07-04 17:53:33.047960 | orchestrator | skipping: [testbed-node-2]
2025-07-04 17:53:33.049382 | orchestrator |
2025-07-04 17:53:33.050596 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2025-07-04 17:53:33.051496 | orchestrator | Friday 04 July 2025 17:53:33 +0000 (0:00:00.520) 0:07:08.334 ***********
2025-07-04 17:53:41.301007 | orchestrator | ok: [testbed-manager]
2025-07-04 17:53:41.301818 | orchestrator | changed: [testbed-node-3]
2025-07-04 17:53:41.303585 | orchestrator | changed: [testbed-node-5]
2025-07-04 17:53:41.303684 | orchestrator | changed: [testbed-node-1]
2025-07-04 17:53:41.304840 | orchestrator | changed: [testbed-node-2]
2025-07-04 17:53:41.306698 | orchestrator | changed: [testbed-node-0]
2025-07-04 17:53:41.307705 | orchestrator | changed: [testbed-node-4]
2025-07-04 17:53:41.308200 | orchestrator |
2025-07-04 17:53:41.308904 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2025-07-04 17:53:41.309405 | orchestrator | Friday 04 July 2025 17:53:41 +0000 (0:00:08.252) 0:07:16.586 ***********
2025-07-04 17:53:42.658904 | orchestrator | ok: [testbed-manager]
2025-07-04 17:53:42.659187 | orchestrator | changed: [testbed-node-3]
2025-07-04 17:53:42.659995 | orchestrator | changed: [testbed-node-4]
2025-07-04 17:53:42.660883 | orchestrator | changed: [testbed-node-5]
2025-07-04 17:53:42.661491 | orchestrator | changed: [testbed-node-0]
2025-07-04 17:53:42.662254 | orchestrator | changed: [testbed-node-1]
2025-07-04 17:53:42.662941 | orchestrator | changed: [testbed-node-2]
2025-07-04 17:53:42.664485 | orchestrator |
2025-07-04 17:53:42.665068 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2025-07-04 17:53:42.666817 | orchestrator | Friday 04 July 2025 17:53:42 +0000 (0:00:01.359) 0:07:17.945 ***********
2025-07-04 17:53:44.439257 | orchestrator | ok: [testbed-manager]
2025-07-04 17:53:44.440462 | orchestrator | changed: [testbed-node-3]
2025-07-04 17:53:44.441216 | orchestrator | changed: [testbed-node-4]
2025-07-04 17:53:44.442327 | orchestrator | changed: [testbed-node-5]
2025-07-04 17:53:44.444150 | orchestrator | changed: [testbed-node-0]
2025-07-04 17:53:44.444828 | orchestrator | changed: [testbed-node-1]
2025-07-04 17:53:44.445510 | orchestrator | changed: [testbed-node-2]
2025-07-04 17:53:44.446480 | orchestrator |
2025-07-04 17:53:44.447017 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2025-07-04 17:53:44.447631 | orchestrator | Friday 04 July 2025 17:53:44 +0000 (0:00:01.778) 0:07:19.724 ***********
2025-07-04 17:53:46.172251 | orchestrator | ok: [testbed-manager]
2025-07-04 17:53:46.172967 | orchestrator | changed: [testbed-node-3]
2025-07-04 17:53:46.174132 | orchestrator | changed: [testbed-node-4]
2025-07-04 17:53:46.178488 | orchestrator | changed: [testbed-node-5]
2025-07-04 17:53:46.179445 | orchestrator | changed: [testbed-node-0]
2025-07-04 17:53:46.180666 | orchestrator | changed: [testbed-node-1]
2025-07-04 17:53:46.181800 | orchestrator | changed: [testbed-node-2]
2025-07-04 17:53:46.182621 | orchestrator |
2025-07-04 17:53:46.183600 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-07-04 17:53:46.184539 | orchestrator | Friday 04 July 2025 17:53:46 +0000 (0:00:01.732) 0:07:21.457 ***********
2025-07-04 17:53:46.615595 | orchestrator | ok: [testbed-manager]
2025-07-04 17:53:47.277341 | orchestrator | ok: [testbed-node-3]
2025-07-04 17:53:47.278582 | orchestrator | ok: [testbed-node-4]
2025-07-04 17:53:47.279530 | orchestrator | ok: [testbed-node-5]
2025-07-04 17:53:47.280432 | orchestrator | ok: [testbed-node-0]
2025-07-04 17:53:47.280945 | orchestrator | ok: [testbed-node-1]
2025-07-04 17:53:47.281543 | orchestrator | ok: [testbed-node-2]
2025-07-04 17:53:47.282509 | orchestrator |
2025-07-04 17:53:47.282998 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-07-04 17:53:47.283742 | orchestrator | Friday 04 July 2025 17:53:47 +0000 (0:00:01.108) 0:07:22.566 ***********
2025-07-04 17:53:47.421567 | orchestrator | skipping: [testbed-manager]
2025-07-04 17:53:47.494383 | orchestrator | skipping: [testbed-node-3]
2025-07-04 17:53:47.562604 | orchestrator | skipping: [testbed-node-4]
2025-07-04 17:53:47.627969 | orchestrator | skipping: [testbed-node-5]
2025-07-04 17:53:47.698576 | orchestrator | skipping: [testbed-node-0]
2025-07-04 17:53:48.106197 | orchestrator | skipping: [testbed-node-1]
2025-07-04 17:53:48.106972 | orchestrator | skipping: [testbed-node-2]
2025-07-04 17:53:48.107710 | orchestrator |
2025-07-04 17:53:48.108967 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2025-07-04 17:53:48.110085 | orchestrator | Friday 04 July 2025 17:53:48 +0000 (0:00:00.825) 0:07:23.391 ***********
2025-07-04 17:53:48.244371 | orchestrator | skipping: [testbed-manager]
2025-07-04 17:53:48.311142 | orchestrator | skipping: [testbed-node-3]
2025-07-04 17:53:48.386721 | orchestrator | skipping: [testbed-node-4]
2025-07-04 17:53:48.451636 | orchestrator | skipping: [testbed-node-5]
2025-07-04 17:53:48.517015 | orchestrator | skipping: [testbed-node-0]
2025-07-04 17:53:48.644888 | orchestrator | skipping: [testbed-node-1]
2025-07-04 17:53:48.648492 | orchestrator | skipping: [testbed-node-2]
2025-07-04 17:53:48.648542 | orchestrator |
2025-07-04 17:53:48.648959 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2025-07-04 17:53:48.650204 | orchestrator | Friday 04 July 2025 17:53:48 +0000 (0:00:00.539) 0:07:23.931 ***********
2025-07-04 17:53:48.772770 | orchestrator | ok: [testbed-manager]
2025-07-04 17:53:48.843496 | orchestrator | ok: [testbed-node-3]
2025-07-04 17:53:48.909801 | orchestrator | ok: [testbed-node-4]
2025-07-04 17:53:48.978615 | orchestrator | ok: [testbed-node-5]
2025-07-04 17:53:49.215986 | orchestrator | ok: [testbed-node-0]
2025-07-04 17:53:49.326540 | orchestrator | ok: [testbed-node-1]
2025-07-04 17:53:49.327095 | orchestrator | ok: [testbed-node-2]
2025-07-04 17:53:49.328513 | orchestrator |
2025-07-04 17:53:49.330108 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2025-07-04 17:53:49.331025 | orchestrator | Friday 04 July 2025 17:53:49 +0000 (0:00:00.681) 0:07:24.613 ***********
2025-07-04 17:53:49.466373 | orchestrator | ok: [testbed-manager]
2025-07-04 17:53:49.531970 | orchestrator | ok: [testbed-node-3]
2025-07-04 17:53:49.595435 | orchestrator | ok: [testbed-node-4]
2025-07-04 17:53:49.665060 | orchestrator | ok: [testbed-node-5]
2025-07-04 17:53:49.752127 | orchestrator | ok: [testbed-node-0]
2025-07-04 17:53:49.872365 | orchestrator | ok: [testbed-node-1]
2025-07-04 17:53:49.872469 | orchestrator | ok: [testbed-node-2]
2025-07-04 17:53:49.873646 | orchestrator |
2025-07-04 17:53:49.876545 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2025-07-04 17:53:49.876618 | orchestrator | Friday 04 July 2025 17:53:49 +0000 (0:00:00.544) 0:07:25.157 ***********
2025-07-04 17:53:50.023416 | orchestrator | ok: [testbed-manager]
2025-07-04 17:53:50.093851 | orchestrator | ok: [testbed-node-3]
2025-07-04 17:53:50.166448 | orchestrator | ok: [testbed-node-4]
2025-07-04 17:53:50.235648 | orchestrator | ok: [testbed-node-5]
2025-07-04 17:53:50.302628 | orchestrator | ok: [testbed-node-0]
2025-07-04 17:53:50.416541 | orchestrator | ok: [testbed-node-1]
2025-07-04 17:53:50.417124 | orchestrator | ok: [testbed-node-2]
2025-07-04 17:53:50.421384 | orchestrator |
2025-07-04 17:53:50.421416 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2025-07-04 17:53:50.421430 | orchestrator | Friday 04 July 2025 17:53:50 +0000 (0:00:00.546) 0:07:25.704 ***********
2025-07-04 17:53:56.079058 | orchestrator | ok: [testbed-manager]
2025-07-04 17:53:56.079607 | orchestrator | ok: [testbed-node-3]
2025-07-04 17:53:56.080693 | orchestrator | ok: [testbed-node-5]
2025-07-04 17:53:56.082465 | orchestrator | ok: [testbed-node-0]
2025-07-04 17:53:56.082564 | orchestrator | ok: [testbed-node-1]
2025-07-04 17:53:56.083464 | orchestrator | ok: [testbed-node-4]
2025-07-04 17:53:56.084348 | orchestrator | ok: [testbed-node-2]
2025-07-04 17:53:56.084844 | orchestrator |
2025-07-04 17:53:56.085732 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2025-07-04 17:53:56.086279 | orchestrator | Friday 04 July 2025 17:53:56 +0000 (0:00:05.661) 0:07:31.365 ***********
2025-07-04 17:53:56.215140 | orchestrator | skipping: [testbed-manager]
2025-07-04 17:53:56.278999 | orchestrator | skipping: [testbed-node-3]
2025-07-04 17:53:56.341877 | orchestrator | skipping: [testbed-node-4]
2025-07-04 17:53:56.413522 | orchestrator | skipping: [testbed-node-5]
2025-07-04 17:53:56.478280 | orchestrator | skipping: [testbed-node-0]
2025-07-04 17:53:56.592957 | orchestrator | skipping: [testbed-node-1]
2025-07-04 17:53:56.593127 | orchestrator | skipping: [testbed-node-2]
2025-07-04 17:53:56.593146 | orchestrator |
2025-07-04 17:53:56.593948 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2025-07-04 17:53:56.594395 | orchestrator | Friday 04 July 2025 17:53:56 +0000 (0:00:00.515) 0:07:31.880 ***********
2025-07-04 17:53:57.612736 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-07-04 17:53:57.613020 | orchestrator |
2025-07-04 17:53:57.614247 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2025-07-04 17:53:57.615191 | orchestrator | Friday 04 July 2025 17:53:57 +0000 (0:00:01.018) 0:07:32.899 ***********
2025-07-04 17:53:59.762233 | orchestrator | ok: [testbed-manager]
2025-07-04 17:53:59.762366 | orchestrator | ok: [testbed-node-3]
2025-07-04 17:53:59.762575 | orchestrator | ok: [testbed-node-0]
2025-07-04 17:53:59.763230 | orchestrator | ok: [testbed-node-5]
2025-07-04 17:53:59.765256 | orchestrator | ok: [testbed-node-2]
2025-07-04 17:53:59.767339 | orchestrator | ok: [testbed-node-4]
2025-07-04 17:53:59.768021 | orchestrator | ok: [testbed-node-1]
2025-07-04 17:53:59.769064 | orchestrator |
2025-07-04 17:53:59.769739 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2025-07-04 17:53:59.770234 | orchestrator | Friday 04 July 2025 17:53:59 +0000 (0:00:02.147) 0:07:35.046 ***********
2025-07-04 17:54:00.896849 | orchestrator | ok: [testbed-manager]
2025-07-04 17:54:00.897650 | orchestrator | ok: [testbed-node-3]
2025-07-04 17:54:00.899094 | orchestrator | ok: [testbed-node-5]
2025-07-04 17:54:00.900219 | orchestrator | ok: [testbed-node-4]
2025-07-04 17:54:00.901388 | orchestrator | ok: [testbed-node-0]
2025-07-04 17:54:00.901767 | orchestrator | ok: [testbed-node-1]
2025-07-04 17:54:00.903547 | orchestrator | ok: [testbed-node-2]
2025-07-04 17:54:00.904879 | orchestrator |
2025-07-04 17:54:00.906089 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2025-07-04 17:54:00.906646 | orchestrator | Friday 04 July 2025 17:54:00 +0000 (0:00:01.137) 0:07:36.183 ***********
2025-07-04 17:54:02.000367 | orchestrator | ok: [testbed-manager]
2025-07-04 17:54:02.001357 | orchestrator | ok: [testbed-node-3]
2025-07-04 17:54:02.002556 | orchestrator | ok: [testbed-node-4]
2025-07-04 17:54:02.003683 | orchestrator | ok: [testbed-node-5]
2025-07-04 17:54:02.004850 | orchestrator | ok: [testbed-node-0]
2025-07-04 17:54:02.005497 | orchestrator | ok: [testbed-node-1]
2025-07-04 17:54:02.006519 | orchestrator | ok: [testbed-node-2]
2025-07-04 17:54:02.007995 | orchestrator |
2025-07-04 17:54:02.009089 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2025-07-04 17:54:02.009719 | orchestrator | Friday 04 July 2025 17:54:01 +0000 (0:00:01.102) 0:07:37.286 ***********
2025-07-04 17:54:03.692976 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-07-04 17:54:03.693115 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-07-04 17:54:03.693142 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-07-04 17:54:03.694387 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-07-04 17:54:03.695653 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-07-04 17:54:03.696016 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-07-04 17:54:03.697763 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-07-04 17:54:03.698948 | orchestrator |
2025-07-04 17:54:03.700173 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2025-07-04 17:54:03.701255 | orchestrator | Friday 04 July 2025 17:54:03 +0000 (0:00:01.690) 0:07:38.976 ***********
2025-07-04 17:54:04.515482 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-07-04 17:54:04.516374 | orchestrator |
2025-07-04 17:54:04.517470 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2025-07-04 17:54:04.518631 | orchestrator | Friday 04 July 2025 17:54:04 +0000 (0:00:00.824) 0:07:39.800 ***********
2025-07-04 17:54:13.814661 | orchestrator | changed: [testbed-node-0]
2025-07-04 17:54:13.816281 | orchestrator | changed: [testbed-node-5]
2025-07-04 17:54:13.817677 | orchestrator | changed: [testbed-node-3]
2025-07-04 17:54:13.819764 | orchestrator | changed: [testbed-manager]
2025-07-04 17:54:13.821492 | orchestrator | changed: [testbed-node-1]
2025-07-04 17:54:13.822660 | orchestrator | changed: [testbed-node-2]
2025-07-04 17:54:13.824040 | orchestrator | changed: [testbed-node-4]
2025-07-04 17:54:13.825712 | orchestrator |
2025-07-04 17:54:13.826806 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] *****************************
2025-07-04 17:54:13.827717 | orchestrator | Friday 04 July 2025 17:54:13 +0000 (0:00:09.299) 0:07:49.100 ***********
2025-07-04 17:54:16.273652 | orchestrator | ok: [testbed-manager]
2025-07-04 17:54:16.274938 | orchestrator | ok: [testbed-node-4]
2025-07-04 17:54:16.278248 | orchestrator | ok: [testbed-node-5]
2025-07-04 17:54:16.278310 | orchestrator | ok: [testbed-node-0]
2025-07-04 17:54:16.279353 | orchestrator | ok: [testbed-node-1]
2025-07-04 17:54:16.279781 | orchestrator | ok: [testbed-node-2]
2025-07-04 17:54:16.280783 | orchestrator | ok: [testbed-node-3]
2025-07-04 17:54:16.281647 | orchestrator |
2025-07-04 17:54:16.282336 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2025-07-04 17:54:16.282916 | orchestrator | Friday 04 July 2025 17:54:16 +0000 (0:00:02.457) 0:07:51.558 ***********
2025-07-04 17:54:17.512383 | orchestrator | ok: [testbed-node-3]
2025-07-04 17:54:17.513560 | orchestrator | ok: [testbed-node-4]
2025-07-04 17:54:17.514737 | orchestrator | ok: [testbed-node-0]
2025-07-04 17:54:17.515564 | orchestrator | ok: [testbed-node-5]
2025-07-04 17:54:17.517277 | orchestrator | ok: [testbed-node-1]
2025-07-04 17:54:17.518716 | orchestrator | ok: [testbed-node-2]
2025-07-04 17:54:17.520100 | orchestrator |
2025-07-04 17:54:17.521113 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2025-07-04 17:54:17.522153 | orchestrator | Friday 04 July 2025 17:54:17 +0000 (0:00:01.238) 0:07:52.797 ***********
2025-07-04 17:54:19.042483 | orchestrator | changed: [testbed-manager]
2025-07-04 17:54:19.043184 | orchestrator | changed: [testbed-node-3]
2025-07-04 17:54:19.047059 | orchestrator | changed: [testbed-node-4]
2025-07-04 17:54:19.047098 | orchestrator | changed: [testbed-node-5]
2025-07-04 17:54:19.047107 | orchestrator | changed: [testbed-node-0]
2025-07-04 17:54:19.047115 | orchestrator | changed: [testbed-node-1]
2025-07-04 17:54:19.047310 | orchestrator | changed: [testbed-node-2]
2025-07-04 17:54:19.047973 | orchestrator |
2025-07-04 17:54:19.048418 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2025-07-04 17:54:19.048844 | orchestrator |
2025-07-04 17:54:19.049244 | orchestrator | TASK [Include hardening role] **************************************************
2025-07-04 17:54:19.049685 | orchestrator | Friday 04 July 2025 17:54:19 +0000 (0:00:01.532) 0:07:54.329 ***********
2025-07-04 17:54:19.175448 | orchestrator | skipping: [testbed-manager]
2025-07-04 17:54:19.236435 | orchestrator | skipping: [testbed-node-3]
2025-07-04 17:54:19.298197 | orchestrator | skipping: [testbed-node-4]
2025-07-04 17:54:19.365590 | orchestrator | skipping: [testbed-node-5]
2025-07-04 17:54:19.429805 | orchestrator | skipping: [testbed-node-0]
2025-07-04 17:54:19.546991 | orchestrator | skipping: [testbed-node-1]
2025-07-04 17:54:19.547738 | orchestrator | skipping: [testbed-node-2]
2025-07-04 17:54:19.548950 | orchestrator |
2025-07-04 17:54:19.549822 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2025-07-04 17:54:19.551232 | orchestrator |
2025-07-04 17:54:19.552144 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2025-07-04 17:54:19.552937 | orchestrator | Friday 04 July 2025 17:54:19 +0000 (0:00:00.504) 0:07:54.834 ***********
2025-07-04 17:54:20.907606 | orchestrator | changed: [testbed-manager]
2025-07-04 17:54:20.907732 | orchestrator | changed: [testbed-node-3]
2025-07-04 17:54:20.909004 | orchestrator | changed: [testbed-node-5]
2025-07-04 17:54:20.909549 | orchestrator | changed: [testbed-node-0]
2025-07-04 17:54:20.910098 | orchestrator | changed: [testbed-node-4]
2025-07-04 17:54:20.910818 | orchestrator | changed: [testbed-node-1]
2025-07-04 17:54:20.911337 | orchestrator | changed: [testbed-node-2]
2025-07-04 17:54:20.914225 | orchestrator |
2025-07-04 17:54:20.914273 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2025-07-04 17:54:20.914285 | orchestrator | Friday 04 July 2025 17:54:20 +0000 (0:00:01.358) 0:07:56.192 ***********
2025-07-04 17:54:22.406423 | orchestrator | ok: [testbed-manager]
2025-07-04 17:54:22.409045 | orchestrator | ok: [testbed-node-3]
2025-07-04 17:54:22.410119 | orchestrator | ok: [testbed-node-4]
2025-07-04 17:54:22.411300 | orchestrator | ok: [testbed-node-5]
2025-07-04 17:54:22.412224 | orchestrator | ok: [testbed-node-0]
2025-07-04 17:54:22.413037 | orchestrator | ok: [testbed-node-1]
2025-07-04 17:54:22.413971 | orchestrator | ok: [testbed-node-2]
2025-07-04 17:54:22.415402 | orchestrator |
2025-07-04 17:54:22.415518 | orchestrator | TASK [Include auditd role] *****************************************************
2025-07-04 17:54:22.416074 | orchestrator | Friday 04 July 2025 17:54:22 +0000 (0:00:01.485) 0:07:57.677 ***********
2025-07-04 17:54:22.729941 | orchestrator | skipping: [testbed-manager]
2025-07-04 17:54:22.796216 | orchestrator | skipping: [testbed-node-3]
2025-07-04 17:54:22.883639 | orchestrator | skipping: [testbed-node-4]
2025-07-04 17:54:22.967075 | orchestrator | skipping: [testbed-node-5]
2025-07-04 17:54:23.050592 | orchestrator | skipping: [testbed-node-0]
2025-07-04 17:54:23.458298 | orchestrator | skipping: [testbed-node-1]
2025-07-04 17:54:23.461802 | orchestrator | skipping: [testbed-node-2]
2025-07-04 17:54:23.461924 | orchestrator |
2025-07-04 17:54:23.463628 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] ***********
2025-07-04 17:54:23.464412 | orchestrator | Friday 04 July 2025 17:54:23 +0000 (0:00:01.067) 0:07:58.745 ***********
2025-07-04 17:54:24.812391 | orchestrator | changed: [testbed-manager]
2025-07-04 17:54:24.815545 | orchestrator | changed: [testbed-node-3]
2025-07-04 17:54:24.815597 | orchestrator | changed: [testbed-node-4]
2025-07-04 17:54:24.815610 | orchestrator | changed: [testbed-node-5]
2025-07-04 17:54:24.816596 | orchestrator | changed: [testbed-node-0]
2025-07-04 17:54:24.817383 | orchestrator | changed: [testbed-node-1]
2025-07-04 17:54:24.818073 | orchestrator | changed: [testbed-node-2]
2025-07-04 17:54:24.819195 | orchestrator |
2025-07-04 17:54:24.819918 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2025-07-04 17:54:24.821824 | orchestrator |
2025-07-04 17:54:24.821996 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2025-07-04 17:54:24.823574 | orchestrator | Friday 04 July 2025 17:54:24 +0000 (0:00:01.354) 0:08:00.099 ***********
2025-07-04 17:54:25.811513 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-07-04 17:54:25.814146 | orchestrator |
2025-07-04 17:54:25.814228 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2025-07-04 17:54:25.815474 | orchestrator | Friday 04 July 2025 17:54:25 +0000 (0:00:00.997) 0:08:01.096 ***********
2025-07-04 17:54:26.234898 | orchestrator | ok: [testbed-manager]
2025-07-04 17:54:26.694812 | orchestrator | ok: [testbed-node-3]
2025-07-04 17:54:26.696049 | orchestrator | ok: [testbed-node-4]
2025-07-04 17:54:26.696710 | orchestrator | ok: [testbed-node-5]
2025-07-04 17:54:26.697418 | orchestrator | ok: [testbed-node-0]
2025-07-04 17:54:26.697871 | orchestrator | ok: [testbed-node-1]
2025-07-04 17:54:26.698901 | orchestrator | ok: [testbed-node-2]
2025-07-04 17:54:26.699535 | orchestrator |
2025-07-04 17:54:26.700122 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2025-07-04 17:54:26.700608 | orchestrator | Friday 04 July 2025 17:54:26 +0000 (0:00:00.883) 0:08:01.980 ***********
2025-07-04 17:54:27.881888 | orchestrator | changed: [testbed-manager]
2025-07-04 17:54:27.885616 | orchestrator | changed: [testbed-node-4]
2025-07-04 17:54:27.885707 | orchestrator | changed: [testbed-node-3]
2025-07-04 17:54:27.885722 | orchestrator | changed: [testbed-node-5]
2025-07-04 17:54:27.887017 | orchestrator | changed: [testbed-node-0]
2025-07-04 17:54:27.888220 | orchestrator | changed: [testbed-node-1]
2025-07-04 17:54:27.889599 | orchestrator | changed: [testbed-node-2]
2025-07-04 17:54:27.891327 | orchestrator |
2025-07-04 17:54:27.892582 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2025-07-04 17:54:27.893362 | orchestrator | Friday 04 July 2025 17:54:27 +0000 (0:00:01.188) 0:08:03.168 ***********
2025-07-04 17:54:28.879293 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-07-04 17:54:28.880601 | orchestrator |
2025-07-04 17:54:28.883461 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2025-07-04 17:54:28.884134 | orchestrator | Friday 04 July 2025 17:54:28 +0000 (0:00:00.997) 0:08:04.165 ***********
2025-07-04 17:54:29.299912 | orchestrator | ok: [testbed-manager]
2025-07-04 17:54:29.758772 | orchestrator | ok: [testbed-node-3]
2025-07-04 17:54:29.759463 | orchestrator | ok: [testbed-node-4]
2025-07-04 17:54:29.760324 | orchestrator | ok: [testbed-node-5]
2025-07-04 17:54:29.764067 | orchestrator | ok: [testbed-node-0]
2025-07-04 17:54:29.764175 | orchestrator | ok: [testbed-node-1]
2025-07-04 17:54:29.764189 | orchestrator | ok: [testbed-node-2]
2025-07-04 17:54:29.764254 | orchestrator |
2025-07-04 17:54:29.765094 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2025-07-04 17:54:29.766005 | orchestrator | Friday 04 July 2025 17:54:29 +0000 (0:00:00.878) 0:08:05.043 ***********
2025-07-04 17:54:30.197078 | orchestrator | changed: [testbed-manager]
2025-07-04 17:54:30.867101 | orchestrator | changed: [testbed-node-3]
2025-07-04 17:54:30.867198 | orchestrator | changed: [testbed-node-4]
2025-07-04 17:54:30.868327 | orchestrator | changed: [testbed-node-0]
2025-07-04 17:54:30.869410 | orchestrator | changed: [testbed-node-5]
2025-07-04 17:54:30.871011 | orchestrator | changed: [testbed-node-1]
2025-07-04 17:54:30.872488 | orchestrator | changed: [testbed-node-2]
2025-07-04 17:54:30.873892 | orchestrator |
2025-07-04 17:54:30.874722 | orchestrator | PLAY RECAP *********************************************************************
2025-07-04 17:54:30.875936 | orchestrator | 2025-07-04 17:54:30 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-07-04 17:54:30.875965 | orchestrator | 2025-07-04 17:54:30 | INFO  | Please wait and do not abort execution.
2025-07-04 17:54:30.877218 | orchestrator | testbed-manager : ok=162  changed=38  unreachable=0 failed=0 skipped=41  rescued=0 ignored=0
2025-07-04 17:54:30.877942 | orchestrator | testbed-node-0 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-07-04 17:54:30.878963 | orchestrator | testbed-node-1 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-07-04 17:54:30.880429 | orchestrator | testbed-node-2 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-07-04 17:54:30.881341 | orchestrator | testbed-node-3 : ok=169  changed=63  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2025-07-04 17:54:30.882586 | orchestrator | testbed-node-4 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-07-04 17:54:30.883257 | orchestrator | testbed-node-5 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-07-04 17:54:30.884435 | orchestrator |
2025-07-04 17:54:30.885176 | orchestrator |
2025-07-04 17:54:30.886367 | orchestrator | TASKS RECAP ********************************************************************
2025-07-04 17:54:30.887269 | orchestrator | Friday 04 July 2025 17:54:30 +0000 (0:00:01.108) 0:08:06.152 ***********
2025-07-04 17:54:30.887912 | orchestrator | ===============================================================================
2025-07-04 17:54:30.888890 | orchestrator | osism.commons.packages : Install required packages --------------------- 76.49s
2025-07-04 17:54:30.889699 | orchestrator | osism.commons.packages : Download required packages -------------------- 39.61s
2025-07-04 17:54:30.890976 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 34.26s
2025-07-04 17:54:30.891390 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 14.36s
2025-07-04 17:54:30.892228 | orchestrator | osism.commons.repository : Update package cache ------------------------ 13.98s
2025-07-04 17:54:30.892652 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 11.93s
2025-07-04 17:54:30.893700 | orchestrator | osism.services.docker : Install docker package ------------------------- 11.34s
2025-07-04 17:54:30.894322 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.79s
2025-07-04 17:54:30.895337 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 9.60s
2025-07-04 17:54:30.895516 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.30s
2025-07-04 17:54:30.896220 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 8.25s
2025-07-04 17:54:30.896998 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.24s
2025-07-04 17:54:30.897037 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.16s
2025-07-04 17:54:30.897771 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 7.93s
2025-07-04 17:54:30.898151 | orchestrator | osism.services.docker : Add repository ---------------------------------- 7.85s
2025-07-04 17:54:30.898633 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.78s
2025-07-04 17:54:30.898950 | orchestrator | osism.commons.packages : Upgrade packages ------------------------------- 6.83s
2025-07-04 17:54:30.899247 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 6.50s
2025-07-04 17:54:30.899552 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.44s
2025-07-04 17:54:30.900010 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.79s
2025-07-04 17:54:31.609360 | orchestrator | + [[ -e /etc/redhat-release ]]
2025-07-04 17:54:31.609461 | orchestrator | + osism apply network
2025-07-04 17:54:33.847112 | orchestrator | Registering Redlock._acquired_script
2025-07-04 17:54:33.847202 | orchestrator | Registering Redlock._extend_script
2025-07-04 17:54:33.847209 | orchestrator | Registering Redlock._release_script
2025-07-04 17:54:33.928325 | orchestrator | 2025-07-04 17:54:33 | INFO  | Task 4fc612e4-9d7d-4441-99cb-e84dc0e48a96 (network) was prepared for execution.
2025-07-04 17:54:33.928503 | orchestrator | 2025-07-04 17:54:33 | INFO  | It takes a moment until task 4fc612e4-9d7d-4441-99cb-e84dc0e48a96 (network) has been started and output is visible here.
2025-07-04 17:54:38.379617 | orchestrator |
2025-07-04 17:54:38.382791 | orchestrator | PLAY [Apply role network] ******************************************************
2025-07-04 17:54:38.382883 | orchestrator |
2025-07-04 17:54:38.382893 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2025-07-04 17:54:38.382901 | orchestrator | Friday 04 July 2025 17:54:38 +0000 (0:00:00.298) 0:00:00.298 ***********
2025-07-04 17:54:38.538430 | orchestrator | ok: [testbed-manager]
2025-07-04 17:54:38.618513 | orchestrator | ok: [testbed-node-0]
2025-07-04 17:54:38.697702 | orchestrator | ok: [testbed-node-1]
2025-07-04 17:54:38.772970 | orchestrator | ok: [testbed-node-2]
2025-07-04 17:54:38.967210 | orchestrator | ok: [testbed-node-3]
2025-07-04 17:54:39.104575 | orchestrator | ok: [testbed-node-4]
2025-07-04 17:54:39.105611 | orchestrator | ok: [testbed-node-5]
2025-07-04 17:54:39.106689 | orchestrator |
2025-07-04 17:54:39.109351 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2025-07-04 17:54:39.109375 | orchestrator | Friday 04 July 2025 17:54:39 +0000 (0:00:00.723) 0:00:01.022 ***********
2025-07-04 17:54:40.317851 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-04 17:54:40.318341 | orchestrator |
2025-07-04 17:54:40.319652 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2025-07-04 17:54:40.320605 | orchestrator | Friday 04 July 2025 17:54:40 +0000 (0:00:01.212) 0:00:02.235 ***********
2025-07-04 17:54:42.457383 | orchestrator | ok: [testbed-manager]
2025-07-04 17:54:42.457799 | orchestrator | ok: [testbed-node-1]
2025-07-04 17:54:42.460690 | orchestrator | ok: [testbed-node-0]
2025-07-04 17:54:42.460842 | orchestrator | ok: [testbed-node-2]
2025-07-04 17:54:42.460938 | orchestrator | ok: [testbed-node-3]
2025-07-04 17:54:42.462252 | orchestrator | ok: [testbed-node-5]
2025-07-04 17:54:42.462904 | orchestrator | ok: [testbed-node-4]
2025-07-04 17:54:42.463729 | orchestrator |
2025-07-04 17:54:42.464332 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2025-07-04 17:54:42.464920 | orchestrator | Friday 04 July 2025 17:54:42 +0000 (0:00:02.141) 0:00:04.377 ***********
2025-07-04 17:54:44.253879 | orchestrator | ok: [testbed-manager]
2025-07-04 17:54:44.256935 | orchestrator | ok: [testbed-node-1]
2025-07-04 17:54:44.261287 | orchestrator | ok: [testbed-node-0]
2025-07-04 17:54:44.261388 | orchestrator | ok: [testbed-node-3]
2025-07-04 17:54:44.261403 | orchestrator | ok: [testbed-node-2]
2025-07-04 17:54:44.261422 | orchestrator | ok: [testbed-node-5]
2025-07-04 17:54:44.262285 | orchestrator | ok: [testbed-node-4]
2025-07-04 17:54:44.263199 | orchestrator |
2025-07-04 17:54:44.263477 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2025-07-04 17:54:44.263885 | orchestrator | Friday 04 July 2025 17:54:44 +0000 (0:00:01.793) 0:00:06.170 ***********
2025-07-04 17:54:44.838666 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2025-07-04 17:54:44.839876 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2025-07-04 17:54:44.843673 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2025-07-04 17:54:45.306218 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2025-07-04 17:54:45.307074 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2025-07-04 17:54:45.311514 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2025-07-04 17:54:45.311548 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2025-07-04 17:54:45.311558 | orchestrator |
2025-07-04 17:54:45.311569 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] **********
2025-07-04 17:54:45.311580 | orchestrator | Friday 04 July 2025 17:54:45 +0000 (0:00:01.056) 0:00:07.226 ***********
2025-07-04 17:54:48.867123 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-07-04 17:54:48.867254 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-07-04 17:54:48.867271 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-07-04 17:54:48.867348 | orchestrator | ok: [testbed-manager -> localhost]
2025-07-04 17:54:48.868449 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-07-04 17:54:48.868475 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-07-04 17:54:48.870894 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-07-04 17:54:48.872373 | orchestrator |
2025-07-04 17:54:48.874492 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] **********************
2025-07-04 17:54:48.875659 | orchestrator | Friday 04 July 2025 17:54:48 +0000 (0:00:03.557) 0:00:10.783 ***********
2025-07-04 17:54:50.337282 | orchestrator | changed: [testbed-manager]
2025-07-04 17:54:50.340610 | orchestrator | changed: [testbed-node-0]
2025-07-04 17:54:50.340664 | orchestrator | changed: [testbed-node-1]
2025-07-04 17:54:50.342964 | orchestrator | changed: [testbed-node-2]
2025-07-04 17:54:50.344172 | orchestrator | changed: [testbed-node-3]
2025-07-04 17:54:50.345015 | orchestrator | changed: [testbed-node-4]
2025-07-04 17:54:50.346097 | orchestrator | changed: [testbed-node-5]
2025-07-04 17:54:50.346915 | orchestrator |
2025-07-04 17:54:50.347842 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] ***********
2025-07-04 17:54:50.348739 | orchestrator | Friday 04 July 2025 17:54:50 +0000 (0:00:01.470) 0:00:12.254 ***********
2025-07-04 17:54:52.182450 | orchestrator | ok: [testbed-manager -> localhost]
2025-07-04 17:54:52.183454 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-07-04 17:54:52.183838 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-07-04 17:54:52.185734 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-07-04 17:54:52.186446 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-07-04 17:54:52.187450 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-07-04 17:54:52.188029 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-07-04 17:54:52.188603 | orchestrator |
2025-07-04 17:54:52.189237 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] *********
2025-07-04 17:54:52.190455 | orchestrator | Friday 04 July 2025 17:54:52 +0000 (0:00:01.847) 0:00:14.102 ***********
2025-07-04 17:54:52.624021 | orchestrator | ok: [testbed-manager]
2025-07-04 17:54:52.917208 | orchestrator | ok: [testbed-node-0]
2025-07-04 17:54:53.346253 | orchestrator | ok: [testbed-node-1]
2025-07-04 17:54:53.347629 | orchestrator | ok: [testbed-node-2]
2025-07-04 17:54:53.350496 | orchestrator | ok: [testbed-node-3]
2025-07-04 17:54:53.350753 | orchestrator | ok: [testbed-node-4]
2025-07-04 17:54:53.352170 | orchestrator | ok: [testbed-node-5]
2025-07-04 17:54:53.353295 | orchestrator |
2025-07-04 17:54:53.354545 | orchestrator | TASK [osism.commons.network : Copy interfaces file] ****************************
2025-07-04 17:54:53.355695 | orchestrator | Friday 04 July 2025 17:54:53 +0000 (0:00:01.160) 0:00:15.263 ***********
2025-07-04 17:54:53.512133
| orchestrator | skipping: [testbed-manager] 2025-07-04 17:54:53.598918 | orchestrator | skipping: [testbed-node-0] 2025-07-04 17:54:53.693730 | orchestrator | skipping: [testbed-node-1] 2025-07-04 17:54:53.790439 | orchestrator | skipping: [testbed-node-2] 2025-07-04 17:54:53.890719 | orchestrator | skipping: [testbed-node-3] 2025-07-04 17:54:54.050917 | orchestrator | skipping: [testbed-node-4] 2025-07-04 17:54:54.052733 | orchestrator | skipping: [testbed-node-5] 2025-07-04 17:54:54.053953 | orchestrator | 2025-07-04 17:54:54.055556 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2025-07-04 17:54:54.055596 | orchestrator | Friday 04 July 2025 17:54:54 +0000 (0:00:00.708) 0:00:15.971 *********** 2025-07-04 17:54:56.146286 | orchestrator | ok: [testbed-node-0] 2025-07-04 17:54:56.146423 | orchestrator | ok: [testbed-manager] 2025-07-04 17:54:56.148734 | orchestrator | ok: [testbed-node-1] 2025-07-04 17:54:56.148836 | orchestrator | ok: [testbed-node-3] 2025-07-04 17:54:56.150747 | orchestrator | ok: [testbed-node-2] 2025-07-04 17:54:56.151407 | orchestrator | ok: [testbed-node-5] 2025-07-04 17:54:56.152047 | orchestrator | ok: [testbed-node-4] 2025-07-04 17:54:56.153156 | orchestrator | 2025-07-04 17:54:56.153362 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2025-07-04 17:54:56.155885 | orchestrator | Friday 04 July 2025 17:54:56 +0000 (0:00:02.089) 0:00:18.061 *********** 2025-07-04 17:54:56.408352 | orchestrator | skipping: [testbed-node-0] 2025-07-04 17:54:56.491162 | orchestrator | skipping: [testbed-node-1] 2025-07-04 17:54:56.575171 | orchestrator | skipping: [testbed-node-2] 2025-07-04 17:54:56.656642 | orchestrator | skipping: [testbed-node-3] 2025-07-04 17:54:57.022414 | orchestrator | skipping: [testbed-node-4] 2025-07-04 17:54:57.022562 | orchestrator | skipping: [testbed-node-5] 2025-07-04 17:54:57.022583 | orchestrator | changed: [testbed-manager] => 
(item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2025-07-04 17:54:57.022684 | orchestrator | 2025-07-04 17:54:57.023937 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2025-07-04 17:54:57.024695 | orchestrator | Friday 04 July 2025 17:54:57 +0000 (0:00:00.881) 0:00:18.942 *********** 2025-07-04 17:54:58.701263 | orchestrator | ok: [testbed-manager] 2025-07-04 17:54:58.704050 | orchestrator | changed: [testbed-node-1] 2025-07-04 17:54:58.707012 | orchestrator | changed: [testbed-node-2] 2025-07-04 17:54:58.708170 | orchestrator | changed: [testbed-node-0] 2025-07-04 17:54:58.708744 | orchestrator | changed: [testbed-node-3] 2025-07-04 17:54:58.709558 | orchestrator | changed: [testbed-node-4] 2025-07-04 17:54:58.710581 | orchestrator | changed: [testbed-node-5] 2025-07-04 17:54:58.710772 | orchestrator | 2025-07-04 17:54:58.711454 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2025-07-04 17:54:58.712253 | orchestrator | Friday 04 July 2025 17:54:58 +0000 (0:00:01.675) 0:00:20.617 *********** 2025-07-04 17:55:00.093211 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-04 17:55:00.093437 | orchestrator | 2025-07-04 17:55:00.095544 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-07-04 17:55:00.096523 | orchestrator | Friday 04 July 2025 17:55:00 +0000 (0:00:01.392) 0:00:22.010 *********** 2025-07-04 17:55:00.685457 | orchestrator | ok: [testbed-manager] 2025-07-04 17:55:01.127103 | orchestrator | ok: [testbed-node-0] 2025-07-04 17:55:01.128904 | orchestrator | ok: [testbed-node-1] 2025-07-04 17:55:01.132717 | orchestrator | ok: [testbed-node-2] 2025-07-04 17:55:01.132770 | 
orchestrator | ok: [testbed-node-3] 2025-07-04 17:55:01.132817 | orchestrator | ok: [testbed-node-4] 2025-07-04 17:55:01.132829 | orchestrator | ok: [testbed-node-5] 2025-07-04 17:55:01.133808 | orchestrator | 2025-07-04 17:55:01.134716 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2025-07-04 17:55:01.135582 | orchestrator | Friday 04 July 2025 17:55:01 +0000 (0:00:01.036) 0:00:23.047 *********** 2025-07-04 17:55:01.486084 | orchestrator | ok: [testbed-manager] 2025-07-04 17:55:01.572924 | orchestrator | ok: [testbed-node-0] 2025-07-04 17:55:01.656941 | orchestrator | ok: [testbed-node-1] 2025-07-04 17:55:01.744606 | orchestrator | ok: [testbed-node-2] 2025-07-04 17:55:01.830177 | orchestrator | ok: [testbed-node-3] 2025-07-04 17:55:01.978229 | orchestrator | ok: [testbed-node-4] 2025-07-04 17:55:01.979275 | orchestrator | ok: [testbed-node-5] 2025-07-04 17:55:01.980544 | orchestrator | 2025-07-04 17:55:01.981519 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-07-04 17:55:01.984556 | orchestrator | Friday 04 July 2025 17:55:01 +0000 (0:00:00.853) 0:00:23.900 *********** 2025-07-04 17:55:02.401264 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2025-07-04 17:55:02.402475 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2025-07-04 17:55:02.717585 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2025-07-04 17:55:02.718403 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2025-07-04 17:55:02.719526 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2025-07-04 17:55:02.720344 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2025-07-04 17:55:02.721360 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2025-07-04 17:55:02.722931 | 
orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2025-07-04 17:55:02.723187 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2025-07-04 17:55:02.723968 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2025-07-04 17:55:03.207143 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2025-07-04 17:55:03.207548 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2025-07-04 17:55:03.210559 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2025-07-04 17:55:03.210620 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2025-07-04 17:55:03.210634 | orchestrator | 2025-07-04 17:55:03.211292 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2025-07-04 17:55:03.212517 | orchestrator | Friday 04 July 2025 17:55:03 +0000 (0:00:01.222) 0:00:25.123 *********** 2025-07-04 17:55:03.370314 | orchestrator | skipping: [testbed-manager] 2025-07-04 17:55:03.453945 | orchestrator | skipping: [testbed-node-0] 2025-07-04 17:55:03.535487 | orchestrator | skipping: [testbed-node-1] 2025-07-04 17:55:03.614389 | orchestrator | skipping: [testbed-node-2] 2025-07-04 17:55:03.720576 | orchestrator | skipping: [testbed-node-3] 2025-07-04 17:55:03.854341 | orchestrator | skipping: [testbed-node-4] 2025-07-04 17:55:03.855492 | orchestrator | skipping: [testbed-node-5] 2025-07-04 17:55:03.856820 | orchestrator | 2025-07-04 17:55:03.858222 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2025-07-04 17:55:03.859287 | orchestrator | Friday 04 July 2025 17:55:03 +0000 (0:00:00.650) 0:00:25.773 *********** 2025-07-04 17:55:08.375665 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-1, 
testbed-manager, testbed-node-0, testbed-node-3, testbed-node-2, testbed-node-5, testbed-node-4 2025-07-04 17:55:08.376009 | orchestrator | 2025-07-04 17:55:08.380946 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2025-07-04 17:55:08.382410 | orchestrator | Friday 04 July 2025 17:55:08 +0000 (0:00:04.518) 0:00:30.292 *********** 2025-07-04 17:55:14.250981 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-07-04 17:55:14.251217 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-07-04 17:55:14.252821 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-07-04 17:55:14.255350 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-07-04 17:55:14.256739 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-07-04 17:55:14.259592 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 
'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-07-04 17:55:14.260609 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-07-04 17:55:14.261325 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-07-04 17:55:14.262481 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-07-04 17:55:14.264550 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-07-04 17:55:14.267628 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-07-04 17:55:14.267673 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-07-04 
17:55:14.268955 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-07-04 17:55:14.269301 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-07-04 17:55:14.272130 | orchestrator | 2025-07-04 17:55:14.272503 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2025-07-04 17:55:14.274367 | orchestrator | Friday 04 July 2025 17:55:14 +0000 (0:00:05.874) 0:00:36.166 *********** 2025-07-04 17:55:20.426418 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-07-04 17:55:20.427041 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-07-04 17:55:20.428488 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-07-04 17:55:20.429059 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', 
'192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-07-04 17:55:20.429953 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-07-04 17:55:20.431840 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-07-04 17:55:20.432424 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-07-04 17:55:20.433241 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-07-04 17:55:20.433955 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-07-04 17:55:20.434500 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-07-04 17:55:20.435347 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': 
{'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-07-04 17:55:20.436040 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-07-04 17:55:20.436821 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-07-04 17:55:20.437420 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-07-04 17:55:20.438125 | orchestrator | 2025-07-04 17:55:20.438799 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2025-07-04 17:55:20.439108 | orchestrator | Friday 04 July 2025 17:55:20 +0000 (0:00:06.180) 0:00:42.346 *********** 2025-07-04 17:55:21.557938 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-04 17:55:21.559465 | orchestrator | 2025-07-04 17:55:21.560900 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-07-04 17:55:21.561903 | orchestrator | Friday 04 July 2025 17:55:21 +0000 (0:00:01.129) 0:00:43.476 *********** 2025-07-04 
17:55:22.003394 | orchestrator | ok: [testbed-manager] 2025-07-04 17:55:22.164634 | orchestrator | ok: [testbed-node-0] 2025-07-04 17:55:22.577001 | orchestrator | ok: [testbed-node-1] 2025-07-04 17:55:22.577122 | orchestrator | ok: [testbed-node-2] 2025-07-04 17:55:22.578341 | orchestrator | ok: [testbed-node-3] 2025-07-04 17:55:22.579939 | orchestrator | ok: [testbed-node-4] 2025-07-04 17:55:22.581172 | orchestrator | ok: [testbed-node-5] 2025-07-04 17:55:22.582481 | orchestrator | 2025-07-04 17:55:22.584217 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-07-04 17:55:22.585308 | orchestrator | Friday 04 July 2025 17:55:22 +0000 (0:00:01.018) 0:00:44.495 *********** 2025-07-04 17:55:22.653982 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2025-07-04 17:55:22.654674 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-07-04 17:55:22.656219 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2025-07-04 17:55:22.755594 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-07-04 17:55:22.757397 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2025-07-04 17:55:22.757840 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-07-04 17:55:22.758970 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2025-07-04 17:55:22.760246 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-07-04 17:55:22.835727 | orchestrator | skipping: [testbed-manager] 2025-07-04 17:55:22.836117 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2025-07-04 17:55:22.836206 | orchestrator | skipping: [testbed-node-1] => 
(item=/etc/systemd/network/30-vxlan0.netdev)  2025-07-04 17:55:22.837176 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2025-07-04 17:55:22.948772 | orchestrator | skipping: [testbed-node-0] 2025-07-04 17:55:22.949881 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-07-04 17:55:22.951054 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2025-07-04 17:55:22.952016 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-07-04 17:55:22.953026 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2025-07-04 17:55:22.954245 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-07-04 17:55:23.037459 | orchestrator | skipping: [testbed-node-1] 2025-07-04 17:55:23.037556 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2025-07-04 17:55:23.037896 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-07-04 17:55:23.039222 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2025-07-04 17:55:23.040105 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-07-04 17:55:23.302331 | orchestrator | skipping: [testbed-node-2] 2025-07-04 17:55:23.302925 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2025-07-04 17:55:23.304605 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-07-04 17:55:23.305965 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2025-07-04 17:55:23.307363 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-07-04 17:55:24.627130 | orchestrator | skipping: 
[testbed-node-3] 2025-07-04 17:55:24.630640 | orchestrator | skipping: [testbed-node-4] 2025-07-04 17:55:24.630729 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2025-07-04 17:55:24.630769 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-07-04 17:55:24.631530 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2025-07-04 17:55:24.633188 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-07-04 17:55:24.633909 | orchestrator | skipping: [testbed-node-5] 2025-07-04 17:55:24.635262 | orchestrator | 2025-07-04 17:55:24.636234 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] ************** 2025-07-04 17:55:24.637235 | orchestrator | Friday 04 July 2025 17:55:24 +0000 (0:00:02.048) 0:00:46.544 *********** 2025-07-04 17:55:24.795962 | orchestrator | skipping: [testbed-manager] 2025-07-04 17:55:24.880242 | orchestrator | skipping: [testbed-node-0] 2025-07-04 17:55:24.959244 | orchestrator | skipping: [testbed-node-1] 2025-07-04 17:55:25.045493 | orchestrator | skipping: [testbed-node-2] 2025-07-04 17:55:25.131939 | orchestrator | skipping: [testbed-node-3] 2025-07-04 17:55:25.256697 | orchestrator | skipping: [testbed-node-4] 2025-07-04 17:55:25.257932 | orchestrator | skipping: [testbed-node-5] 2025-07-04 17:55:25.258954 | orchestrator | 2025-07-04 17:55:25.259827 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2025-07-04 17:55:25.260946 | orchestrator | Friday 04 July 2025 17:55:25 +0000 (0:00:00.634) 0:00:47.178 *********** 2025-07-04 17:55:25.424099 | orchestrator | skipping: [testbed-manager] 2025-07-04 17:55:25.697676 | orchestrator | skipping: [testbed-node-0] 2025-07-04 17:55:25.778172 | orchestrator | skipping: [testbed-node-1] 2025-07-04 17:55:25.892124 | orchestrator | skipping: [testbed-node-2] 
2025-07-04 17:55:25.992975 | orchestrator | skipping: [testbed-node-3]
2025-07-04 17:55:26.035205 | orchestrator | skipping: [testbed-node-4]
2025-07-04 17:55:26.035500 | orchestrator | skipping: [testbed-node-5]
2025-07-04 17:55:26.036482 | orchestrator |
2025-07-04 17:55:26.037372 | orchestrator | PLAY RECAP *********************************************************************
2025-07-04 17:55:26.038316 | orchestrator | 2025-07-04 17:55:26 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-07-04 17:55:26.038355 | orchestrator | 2025-07-04 17:55:26 | INFO  | Please wait and do not abort execution.
2025-07-04 17:55:26.039277 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-07-04 17:55:26.039841 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-07-04 17:55:26.040365 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-07-04 17:55:26.041441 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-07-04 17:55:26.042430 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-07-04 17:55:26.043294 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-07-04 17:55:26.044491 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-07-04 17:55:26.044722 | orchestrator |
2025-07-04 17:55:26.045035 | orchestrator |
2025-07-04 17:55:26.045506 | orchestrator | TASKS RECAP ********************************************************************
2025-07-04 17:55:26.047150 | orchestrator | Friday 04 July 2025 17:55:26 +0000 (0:00:00.778) 0:00:47.957 ***********
2025-07-04 17:55:26.047540 | orchestrator | ===============================================================================
2025-07-04 17:55:26.048005 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 6.18s
2025-07-04 17:55:26.048565 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.87s
2025-07-04 17:55:26.049171 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.52s
2025-07-04 17:55:26.049674 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.56s
2025-07-04 17:55:26.050309 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.14s
2025-07-04 17:55:26.050970 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.09s
2025-07-04 17:55:26.051279 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 2.05s
2025-07-04 17:55:26.051999 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.85s
2025-07-04 17:55:26.052514 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.79s
2025-07-04 17:55:26.052841 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.68s
2025-07-04 17:55:26.053637 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.47s
2025-07-04 17:55:26.053800 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.39s
2025-07-04 17:55:26.054319 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.22s
2025-07-04 17:55:26.055958 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.21s
2025-07-04 17:55:26.057112 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.16s
2025-07-04 17:55:26.058105 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.13s
2025-07-04 17:55:26.059274 | orchestrator | osism.commons.network : Create required directories --------------------- 1.06s
2025-07-04 17:55:26.059843 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.04s
2025-07-04 17:55:26.060435 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.02s
2025-07-04 17:55:26.061093 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.88s
2025-07-04 17:55:26.737711 | orchestrator | + osism apply wireguard
2025-07-04 17:55:28.505653 | orchestrator | Registering Redlock._acquired_script
2025-07-04 17:55:28.505784 | orchestrator | Registering Redlock._extend_script
2025-07-04 17:55:28.505799 | orchestrator | Registering Redlock._release_script
2025-07-04 17:55:28.568249 | orchestrator | 2025-07-04 17:55:28 | INFO  | Task 781084ee-5b9b-4496-81de-4976c3ac6928 (wireguard) was prepared for execution.
2025-07-04 17:55:28.568442 | orchestrator | 2025-07-04 17:55:28 | INFO  | It takes a moment until task 781084ee-5b9b-4496-81de-4976c3ac6928 (wireguard) has been started and output is visible here.
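The "Create systemd networkd netdev files" and "Create systemd networkd network files" tasks above render the `/etc/systemd/network/30-vxlan0.{netdev,network}` files (the filenames appear in the cleanup task's item list). A minimal sketch of what such files could look like for testbed-manager, built only from the parameters visible in the task output (VNI 42, MTU 1350, local VTEP 192.168.16.5, address 192.168.112.5/20, per-peer destination list) — the role's actual templates may differ, and the `[BridgeFDB]` flood-entry approach shown for the unicast destinations is an assumption:

```ini
# /etc/systemd/network/30-vxlan0.netdev (sketch, not the role's actual output)
[NetDev]
Name=vxlan0
Kind=vxlan
MTUBytes=1350

[VXLAN]
VNI=42
Local=192.168.16.5
# How the role attaches vxlan0 to the underlay interface (e.g. via a
# VXLAN= line in the underlay's .network file) is not visible in the log.

# /etc/systemd/network/30-vxlan0.network (sketch)
[Match]
Name=vxlan0

[Network]
Address=192.168.112.5/20

# One all-zero FDB entry per remote VTEP floods BUM traffic to that peer;
# repeated for each of the six destinations listed in the task item.
[BridgeFDB]
MACAddress=00:00:00:00:00:00
Destination=192.168.16.10

[BridgeFDB]
MACAddress=00:00:00:00:00:00
Destination=192.168.16.11
```

The per-host item dicts in the log follow the same pattern: each node's `local_ip` is its own 192.168.16.x address and `dests` is the set of all other VTEPs, so vxlan0 (VNI 42) and vxlan1 (VNI 23) form full meshes over the management network.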
2025-07-04 17:55:32.658238 | orchestrator |
2025-07-04 17:55:32.661624 | orchestrator | PLAY [Apply role wireguard] ****************************************************
2025-07-04 17:55:32.662075 | orchestrator |
2025-07-04 17:55:32.662395 | orchestrator | TASK [osism.services.wireguard : Install iptables package] *********************
2025-07-04 17:55:32.663808 | orchestrator | Friday 04 July 2025 17:55:32 +0000 (0:00:00.226) 0:00:00.226 ***********
2025-07-04 17:55:34.424281 | orchestrator | ok: [testbed-manager]
2025-07-04 17:55:34.424395 | orchestrator |
2025-07-04 17:55:34.425081 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ********************
2025-07-04 17:55:34.425516 | orchestrator | Friday 04 July 2025 17:55:34 +0000 (0:00:01.766) 0:00:01.993 ***********
2025-07-04 17:55:40.980065 | orchestrator | changed: [testbed-manager]
2025-07-04 17:55:40.982316 | orchestrator |
2025-07-04 17:55:40.983128 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] *******
2025-07-04 17:55:40.983444 | orchestrator | Friday 04 July 2025 17:55:40 +0000 (0:00:06.557) 0:00:08.550 ***********
2025-07-04 17:55:41.563376 | orchestrator | changed: [testbed-manager]
2025-07-04 17:55:41.565523 | orchestrator |
2025-07-04 17:55:41.566268 | orchestrator | TASK [osism.services.wireguard : Create preshared key] *************************
2025-07-04 17:55:41.568162 | orchestrator | Friday 04 July 2025 17:55:41 +0000 (0:00:00.584) 0:00:09.135 ***********
2025-07-04 17:55:42.029045 | orchestrator | changed: [testbed-manager]
2025-07-04 17:55:42.030294 | orchestrator |
2025-07-04 17:55:42.030344 | orchestrator | TASK [osism.services.wireguard : Get preshared key] ****************************
2025-07-04 17:55:42.031743 | orchestrator | Friday 04 July 2025 17:55:42 +0000 (0:00:00.463) 0:00:09.598 ***********
2025-07-04 17:55:42.562653 | orchestrator | ok: [testbed-manager]
2025-07-04 17:55:42.563664 | orchestrator |
2025-07-04 17:55:42.564412 | orchestrator | TASK [osism.services.wireguard : Get public key - server] **********************
2025-07-04 17:55:42.566844 | orchestrator | Friday 04 July 2025 17:55:42 +0000 (0:00:00.535) 0:00:10.134 ***********
2025-07-04 17:55:43.120440 | orchestrator | ok: [testbed-manager]
2025-07-04 17:55:43.121838 | orchestrator |
2025-07-04 17:55:43.122425 | orchestrator | TASK [osism.services.wireguard : Get private key - server] *********************
2025-07-04 17:55:43.123828 | orchestrator | Friday 04 July 2025 17:55:43 +0000 (0:00:00.555) 0:00:10.690 ***********
2025-07-04 17:55:43.536049 | orchestrator | ok: [testbed-manager]
2025-07-04 17:55:43.536207 | orchestrator |
2025-07-04 17:55:43.537576 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] *************
2025-07-04 17:55:43.539427 | orchestrator | Friday 04 July 2025 17:55:43 +0000 (0:00:00.416) 0:00:11.106 ***********
2025-07-04 17:55:44.766668 | orchestrator | changed: [testbed-manager]
2025-07-04 17:55:44.767477 | orchestrator |
2025-07-04 17:55:44.768065 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] **************
2025-07-04 17:55:44.769014 | orchestrator | Friday 04 July 2025 17:55:44 +0000 (0:00:01.230) 0:00:12.336 ***********
2025-07-04 17:55:45.738795 | orchestrator | changed: [testbed-manager] => (item=None)
2025-07-04 17:55:45.739526 | orchestrator | changed: [testbed-manager]
2025-07-04 17:55:45.740787 | orchestrator |
2025-07-04 17:55:45.741620 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] **********
2025-07-04 17:55:45.742614 | orchestrator | Friday 04 July 2025 17:55:45 +0000 (0:00:00.969) 0:00:13.306 ***********
2025-07-04 17:55:47.401666 | orchestrator | changed: [testbed-manager]
2025-07-04 17:55:47.403208 | orchestrator |
2025-07-04 17:55:47.405113 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2025-07-04 17:55:47.406771 | orchestrator | Friday 04 July 2025 17:55:47 +0000 (0:00:01.664) 0:00:14.971 ***********
2025-07-04 17:55:48.343074 | orchestrator | changed: [testbed-manager]
2025-07-04 17:55:48.343208 | orchestrator |
2025-07-04 17:55:48.344686 | orchestrator | PLAY RECAP *********************************************************************
2025-07-04 17:55:48.345509 | orchestrator | 2025-07-04 17:55:48 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-07-04 17:55:48.346460 | orchestrator | 2025-07-04 17:55:48 | INFO  | Please wait and do not abort execution.
2025-07-04 17:55:48.348626 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-04 17:55:48.349756 | orchestrator |
2025-07-04 17:55:48.351231 | orchestrator |
2025-07-04 17:55:48.351610 | orchestrator | TASKS RECAP ********************************************************************
2025-07-04 17:55:48.352504 | orchestrator | Friday 04 July 2025 17:55:48 +0000 (0:00:00.942) 0:00:15.914 ***********
2025-07-04 17:55:48.353255 | orchestrator | ===============================================================================
2025-07-04 17:55:48.354230 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.56s
2025-07-04 17:55:48.354440 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.77s
2025-07-04 17:55:48.354925 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.66s
2025-07-04 17:55:48.355427 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.23s
2025-07-04 17:55:48.355904 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.97s
2025-07-04 17:55:48.356321 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.94s
2025-07-04 17:55:48.356673 | orchestrator |
osism.services.wireguard : Create public and private key - server ------- 0.58s 2025-07-04 17:55:48.357740 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.56s 2025-07-04 17:55:48.357949 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.54s 2025-07-04 17:55:48.358265 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.46s 2025-07-04 17:55:48.358651 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.42s 2025-07-04 17:55:48.934095 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2025-07-04 17:55:48.977451 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2025-07-04 17:55:48.977553 | orchestrator | Dload Upload Total Spent Left Speed 2025-07-04 17:55:49.053755 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 196 0 --:--:-- --:--:-- --:--:-- 197 2025-07-04 17:55:49.068951 | orchestrator | + osism apply --environment custom workarounds 2025-07-04 17:55:50.792791 | orchestrator | 2025-07-04 17:55:50 | INFO  | Trying to run play workarounds in environment custom 2025-07-04 17:55:50.797760 | orchestrator | Registering Redlock._acquired_script 2025-07-04 17:55:50.797827 | orchestrator | Registering Redlock._extend_script 2025-07-04 17:55:50.797838 | orchestrator | Registering Redlock._release_script 2025-07-04 17:55:50.859215 | orchestrator | 2025-07-04 17:55:50 | INFO  | Task e44e4e6d-e94c-4505-84a8-a552ab5a02dc (workarounds) was prepared for execution. 2025-07-04 17:55:50.859301 | orchestrator | 2025-07-04 17:55:50 | INFO  | It takes a moment until task e44e4e6d-e94c-4505-84a8-a552ab5a02dc (workarounds) has been started and output is visible here. 
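The wireguard play above boils down to a handful of wg(8) calls plus a systemd unit. A minimal sketch of equivalent tasks follows; the task names mirror the log, but the file paths and implementation details are illustrative assumptions, not the actual osism.services.wireguard role:

```yaml
# Sketch only: the real role lives in the osism.services collection
# and may differ in detail. Paths below are assumptions.
- name: Create public and private key - server
  ansible.builtin.shell: |
    umask 077
    wg genkey | tee /etc/wireguard/server.key | wg pubkey > /etc/wireguard/server.pub
  args:
    creates: /etc/wireguard/server.key

- name: Create preshared key
  ansible.builtin.shell: |
    umask 077
    wg genpsk > /etc/wireguard/server.psk
  args:
    creates: /etc/wireguard/server.psk

- name: Manage wg-quick@wg0.service service
  ansible.builtin.systemd:
    name: wg-quick@wg0.service
    state: started
    enabled: true
```

The `creates:` guards are what make the "Create ..." tasks report `ok` instead of `changed` on a second run, matching the idempotent behavior visible in the recap counters.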
2025-07-04 17:55:55.071211 | orchestrator |
2025-07-04 17:55:55.072388 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-04 17:55:55.074987 | orchestrator |
2025-07-04 17:55:55.075062 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2025-07-04 17:55:55.075086 | orchestrator | Friday 04 July 2025 17:55:55 +0000 (0:00:00.150) 0:00:00.150 ***********
2025-07-04 17:55:55.240608 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2025-07-04 17:55:55.327146 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2025-07-04 17:55:55.411762 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2025-07-04 17:55:55.498004 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2025-07-04 17:55:55.693521 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2025-07-04 17:55:55.865769 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2025-07-04 17:55:55.867180 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2025-07-04 17:55:55.867743 | orchestrator |
2025-07-04 17:55:55.868830 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2025-07-04 17:55:55.869579 | orchestrator |
2025-07-04 17:55:55.870361 | orchestrator | TASK [Apply netplan configuration] *********************************************
2025-07-04 17:55:55.871093 | orchestrator | Friday 04 July 2025 17:55:55 +0000 (0:00:00.797) 0:00:00.948 ***********
2025-07-04 17:55:58.256780 | orchestrator | ok: [testbed-manager]
2025-07-04 17:55:58.260038 | orchestrator |
2025-07-04 17:55:58.260116 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2025-07-04 17:55:58.260302 | orchestrator |
2025-07-04 17:55:58.261513 | orchestrator | TASK [Apply netplan configuration] *********************************************
2025-07-04 17:55:58.262192 | orchestrator | Friday 04 July 2025 17:55:58 +0000 (0:00:02.387) 0:00:03.335 ***********
2025-07-04 17:56:00.188613 | orchestrator | ok: [testbed-node-3]
2025-07-04 17:56:00.192532 | orchestrator | ok: [testbed-node-5]
2025-07-04 17:56:00.192605 | orchestrator | ok: [testbed-node-0]
2025-07-04 17:56:00.192627 | orchestrator | ok: [testbed-node-4]
2025-07-04 17:56:00.192647 | orchestrator | ok: [testbed-node-2]
2025-07-04 17:56:00.192664 | orchestrator | ok: [testbed-node-1]
2025-07-04 17:56:00.193306 | orchestrator |
2025-07-04 17:56:00.195337 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2025-07-04 17:56:00.195935 | orchestrator |
2025-07-04 17:56:00.196803 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2025-07-04 17:56:00.197182 | orchestrator | Friday 04 July 2025 17:56:00 +0000 (0:00:01.933) 0:00:05.269 ***********
2025-07-04 17:56:01.692193 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-07-04 17:56:01.693715 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-07-04 17:56:01.695302 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-07-04 17:56:01.696923 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-07-04 17:56:01.697607 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-07-04 17:56:01.698432 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-07-04 17:56:01.700000 | orchestrator |
2025-07-04 17:56:01.700496 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2025-07-04 17:56:01.701090 | orchestrator | Friday 04 July 2025 17:56:01 +0000 (0:00:01.501) 0:00:06.770 ***********
2025-07-04 17:56:05.511125 | orchestrator | changed: [testbed-node-3]
2025-07-04 17:56:05.511238 | orchestrator | changed: [testbed-node-5]
2025-07-04 17:56:05.512821 | orchestrator | changed: [testbed-node-4]
2025-07-04 17:56:05.512999 | orchestrator | changed: [testbed-node-1]
2025-07-04 17:56:05.513870 | orchestrator | changed: [testbed-node-0]
2025-07-04 17:56:05.515864 | orchestrator | changed: [testbed-node-2]
2025-07-04 17:56:05.516460 | orchestrator |
2025-07-04 17:56:05.517179 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2025-07-04 17:56:05.517872 | orchestrator | Friday 04 July 2025 17:56:05 +0000 (0:00:03.821) 0:00:10.591 ***********
2025-07-04 17:56:05.669107 | orchestrator | skipping: [testbed-node-3]
2025-07-04 17:56:05.749891 | orchestrator | skipping: [testbed-node-4]
2025-07-04 17:56:05.828832 | orchestrator | skipping: [testbed-node-5]
2025-07-04 17:56:05.907341 | orchestrator | skipping: [testbed-node-0]
2025-07-04 17:56:06.254285 | orchestrator | skipping: [testbed-node-1]
2025-07-04 17:56:06.255838 | orchestrator | skipping: [testbed-node-2]
2025-07-04 17:56:06.256874 | orchestrator |
2025-07-04 17:56:06.257319 | orchestrator | PLAY [Add a workaround service] ************************************************
2025-07-04 17:56:06.260531 | orchestrator |
2025-07-04 17:56:06.260583 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2025-07-04 17:56:06.260595 | orchestrator | Friday 04 July 2025 17:56:06 +0000 (0:00:00.745) 0:00:11.337 ***********
2025-07-04 17:56:07.909098 | orchestrator | changed: [testbed-manager]
2025-07-04 17:56:07.910180 | orchestrator | changed: [testbed-node-3]
2025-07-04 17:56:07.911427 | orchestrator | changed: [testbed-node-4]
2025-07-04 17:56:07.913828 | orchestrator | changed: [testbed-node-5]
2025-07-04 17:56:07.914560 | orchestrator | changed: [testbed-node-0]
2025-07-04 17:56:07.916081 | orchestrator | changed: [testbed-node-1]
2025-07-04 17:56:07.916757 | orchestrator | changed: [testbed-node-2]
2025-07-04 17:56:07.918432 | orchestrator |
2025-07-04 17:56:07.919479 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2025-07-04 17:56:07.920643 | orchestrator | Friday 04 July 2025 17:56:07 +0000 (0:00:01.651) 0:00:12.988 ***********
2025-07-04 17:56:09.607436 | orchestrator | changed: [testbed-manager]
2025-07-04 17:56:09.609204 | orchestrator | changed: [testbed-node-3]
2025-07-04 17:56:09.610817 | orchestrator | changed: [testbed-node-4]
2025-07-04 17:56:09.611867 | orchestrator | changed: [testbed-node-5]
2025-07-04 17:56:09.613190 | orchestrator | changed: [testbed-node-0]
2025-07-04 17:56:09.613285 | orchestrator | changed: [testbed-node-1]
2025-07-04 17:56:09.615358 | orchestrator | changed: [testbed-node-2]
2025-07-04 17:56:09.615455 | orchestrator |
2025-07-04 17:56:09.615526 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2025-07-04 17:56:09.616896 | orchestrator | Friday 04 July 2025 17:56:09 +0000 (0:00:01.693) 0:00:14.682 ***********
2025-07-04 17:56:11.122245 | orchestrator | ok: [testbed-node-3]
2025-07-04 17:56:11.127377 | orchestrator | ok: [testbed-node-5]
2025-07-04 17:56:11.127556 | orchestrator | ok: [testbed-node-4]
2025-07-04 17:56:11.127733 | orchestrator | ok: [testbed-node-0]
2025-07-04 17:56:11.129070 | orchestrator | ok: [testbed-node-1]
2025-07-04 17:56:11.129759 | orchestrator | ok: [testbed-manager]
2025-07-04 17:56:11.129881 | orchestrator | ok: [testbed-node-2]
2025-07-04 17:56:11.133740 | orchestrator |
2025-07-04 17:56:11.134266 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2025-07-04 17:56:11.135017 | orchestrator | Friday 04 July 2025 17:56:11 +0000 (0:00:01.518) 0:00:16.201 ***********
2025-07-04 17:56:12.931409 | orchestrator | changed: [testbed-node-3]
2025-07-04 17:56:12.932339 | orchestrator | changed: [testbed-manager]
2025-07-04 17:56:12.933667 | orchestrator | changed: [testbed-node-4]
2025-07-04 17:56:12.935840 | orchestrator | changed: [testbed-node-5]
2025-07-04 17:56:12.936705 | orchestrator | changed: [testbed-node-0]
2025-07-04 17:56:12.938833 | orchestrator | changed: [testbed-node-1]
2025-07-04 17:56:12.939517 | orchestrator | changed: [testbed-node-2]
2025-07-04 17:56:12.940515 | orchestrator |
2025-07-04 17:56:12.941859 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2025-07-04 17:56:12.942726 | orchestrator | Friday 04 July 2025 17:56:12 +0000 (0:00:01.806) 0:00:18.007 ***********
2025-07-04 17:56:13.099065 | orchestrator | skipping: [testbed-manager]
2025-07-04 17:56:13.183075 | orchestrator | skipping: [testbed-node-3]
2025-07-04 17:56:13.268190 | orchestrator | skipping: [testbed-node-4]
2025-07-04 17:56:13.369707 | orchestrator | skipping: [testbed-node-5]
2025-07-04 17:56:13.452428 | orchestrator | skipping: [testbed-node-0]
2025-07-04 17:56:13.587073 | orchestrator | skipping: [testbed-node-1]
2025-07-04 17:56:13.587428 | orchestrator | skipping: [testbed-node-2]
2025-07-04 17:56:13.588701 | orchestrator |
2025-07-04 17:56:13.593464 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2025-07-04 17:56:13.594087 | orchestrator |
2025-07-04 17:56:13.595686 | orchestrator | TASK [Install python3-docker] **************************************************
2025-07-04 17:56:13.596602 | orchestrator | Friday 04 July 2025 17:56:13 +0000 (0:00:00.659) 0:00:18.667 ***********
2025-07-04 17:56:16.393313 | orchestrator | ok: [testbed-node-3]
2025-07-04 17:56:16.394379 | orchestrator | ok: [testbed-node-5]
2025-07-04 17:56:16.397159 | orchestrator | ok: [testbed-manager]
2025-07-04 17:56:16.398812 | orchestrator | ok: [testbed-node-4]
2025-07-04 17:56:16.399534 | orchestrator | ok: [testbed-node-0]
2025-07-04 17:56:16.400358 | orchestrator | ok: [testbed-node-1]
2025-07-04 17:56:16.401668 | orchestrator | ok: [testbed-node-2]
2025-07-04 17:56:16.402885 | orchestrator |
2025-07-04 17:56:16.405230 | orchestrator | PLAY RECAP *********************************************************************
2025-07-04 17:56:16.405305 | orchestrator | 2025-07-04 17:56:16 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-07-04 17:56:16.405322 | orchestrator | 2025-07-04 17:56:16 | INFO  | Please wait and do not abort execution.
2025-07-04 17:56:16.405383 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-07-04 17:56:16.406920 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-04 17:56:16.407480 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-04 17:56:16.408013 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-04 17:56:16.408604 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-04 17:56:16.409413 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-04 17:56:16.409751 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-04 17:56:16.410323 | orchestrator |
2025-07-04 17:56:16.411052 | orchestrator |
2025-07-04 17:56:16.412033 | orchestrator | TASKS RECAP ********************************************************************
2025-07-04 17:56:16.412438 | orchestrator | Friday 04 July 2025 17:56:16 +0000 (0:00:02.805) 0:00:21.472 ***********
2025-07-04 17:56:16.413394 | orchestrator | ===============================================================================
2025-07-04 17:56:16.414463 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.82s
2025-07-04 17:56:16.415526 | orchestrator | Install python3-docker -------------------------------------------------- 2.81s
2025-07-04 17:56:16.416259 | orchestrator | Apply netplan configuration --------------------------------------------- 2.39s
2025-07-04 17:56:16.417164 | orchestrator | Apply netplan configuration --------------------------------------------- 1.93s
2025-07-04 17:56:16.417836 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.81s
2025-07-04 17:56:16.418585 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.69s
2025-07-04 17:56:16.419227 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.65s
2025-07-04 17:56:16.419895 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.52s
2025-07-04 17:56:16.420591 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.50s
2025-07-04 17:56:16.420981 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.80s
2025-07-04 17:56:16.421673 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.75s
2025-07-04 17:56:16.421907 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.66s
2025-07-04 17:56:17.040263 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2025-07-04 17:56:18.740220 | orchestrator | Registering Redlock._acquired_script
2025-07-04 17:56:18.740351 | orchestrator | Registering Redlock._extend_script
2025-07-04 17:56:18.740376 | orchestrator | Registering Redlock._release_script
2025-07-04 17:56:18.812716 | orchestrator | 2025-07-04 17:56:18 | INFO  | Task 5648eadd-8a62-4607-8b27-af4a1b4bd66e (reboot) was prepared for execution.
2025-07-04 17:56:18.812835 | orchestrator | 2025-07-04 17:56:18 | INFO  | It takes a moment until task 5648eadd-8a62-4607-8b27-af4a1b4bd66e (reboot) has been started and output is visible here.
2025-07-04 17:56:22.896862 | orchestrator |
2025-07-04 17:56:22.897657 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-07-04 17:56:22.899028 | orchestrator |
2025-07-04 17:56:22.899535 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-07-04 17:56:22.901209 | orchestrator | Friday 04 July 2025 17:56:22 +0000 (0:00:00.213) 0:00:00.213 ***********
2025-07-04 17:56:22.999590 | orchestrator | skipping: [testbed-node-0]
2025-07-04 17:56:22.999971 | orchestrator |
2025-07-04 17:56:23.002372 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-07-04 17:56:23.002704 | orchestrator | Friday 04 July 2025 17:56:22 +0000 (0:00:00.105) 0:00:00.318 ***********
2025-07-04 17:56:23.955223 | orchestrator | changed: [testbed-node-0]
2025-07-04 17:56:23.957492 | orchestrator |
2025-07-04 17:56:23.959073 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-07-04 17:56:23.960323 | orchestrator | Friday 04 July 2025 17:56:23 +0000 (0:00:00.955) 0:00:01.274 ***********
2025-07-04 17:56:24.093972 | orchestrator | skipping: [testbed-node-0]
2025-07-04 17:56:24.094140 | orchestrator |
2025-07-04 17:56:24.094899 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-07-04 17:56:24.094925 | orchestrator |
2025-07-04 17:56:24.094938 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-07-04 17:56:24.097290 | orchestrator | Friday 04 July 2025 17:56:24 +0000 (0:00:00.134) 0:00:01.408 ***********
2025-07-04 17:56:24.211000 | orchestrator | skipping: [testbed-node-1]
2025-07-04 17:56:24.211911 | orchestrator |
2025-07-04 17:56:24.214177 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-07-04 17:56:24.214273 | orchestrator | Friday 04 July 2025 17:56:24 +0000 (0:00:00.121) 0:00:01.530 ***********
2025-07-04 17:56:24.885114 | orchestrator | changed: [testbed-node-1]
2025-07-04 17:56:24.887143 | orchestrator |
2025-07-04 17:56:24.888531 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-07-04 17:56:24.889222 | orchestrator | Friday 04 July 2025 17:56:24 +0000 (0:00:00.674) 0:00:02.204 ***********
2025-07-04 17:56:25.019228 | orchestrator | skipping: [testbed-node-1]
2025-07-04 17:56:25.020819 | orchestrator |
2025-07-04 17:56:25.021208 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-07-04 17:56:25.022606 | orchestrator |
2025-07-04 17:56:25.023809 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-07-04 17:56:25.024876 | orchestrator | Friday 04 July 2025 17:56:25 +0000 (0:00:00.132) 0:00:02.337 ***********
2025-07-04 17:56:25.236624 | orchestrator | skipping: [testbed-node-2]
2025-07-04 17:56:25.238991 | orchestrator |
2025-07-04 17:56:25.239038 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-07-04 17:56:25.240770 | orchestrator | Friday 04 July 2025 17:56:25 +0000 (0:00:00.218) 0:00:02.556 ***********
2025-07-04 17:56:25.898930 | orchestrator | changed: [testbed-node-2]
2025-07-04 17:56:25.899863 | orchestrator |
2025-07-04 17:56:25.901279 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-07-04 17:56:25.902122 | orchestrator | Friday 04 July 2025 17:56:25 +0000 (0:00:00.662) 0:00:03.218 ***********
2025-07-04 17:56:26.012890 | orchestrator | skipping: [testbed-node-2]
2025-07-04 17:56:26.014692 | orchestrator |
2025-07-04 17:56:26.015655 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-07-04 17:56:26.016303 | orchestrator |
2025-07-04 17:56:26.017307 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-07-04 17:56:26.018069 | orchestrator | Friday 04 July 2025 17:56:26 +0000 (0:00:00.110) 0:00:03.329 ***********
2025-07-04 17:56:26.106452 | orchestrator | skipping: [testbed-node-3]
2025-07-04 17:56:26.107693 | orchestrator |
2025-07-04 17:56:26.110127 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-07-04 17:56:26.110580 | orchestrator | Friday 04 July 2025 17:56:26 +0000 (0:00:00.097) 0:00:03.426 ***********
2025-07-04 17:56:26.774480 | orchestrator | changed: [testbed-node-3]
2025-07-04 17:56:26.775524 | orchestrator |
2025-07-04 17:56:26.776740 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-07-04 17:56:26.777606 | orchestrator | Friday 04 July 2025 17:56:26 +0000 (0:00:00.667) 0:00:04.093 ***********
2025-07-04 17:56:26.889761 | orchestrator | skipping: [testbed-node-3]
2025-07-04 17:56:26.889925 | orchestrator |
2025-07-04 17:56:26.890883 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-07-04 17:56:26.891760 | orchestrator |
2025-07-04 17:56:26.893207 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-07-04 17:56:26.893335 | orchestrator | Friday 04 July 2025 17:56:26 +0000 (0:00:00.113) 0:00:04.207 ***********
2025-07-04 17:56:27.007189 | orchestrator | skipping: [testbed-node-4]
2025-07-04 17:56:27.007503 | orchestrator |
2025-07-04 17:56:27.008387 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-07-04 17:56:27.010102 | orchestrator | Friday 04 July 2025 17:56:26 +0000 (0:00:00.118) 0:00:04.326 ***********
2025-07-04 17:56:27.687811 | orchestrator | changed: [testbed-node-4]
2025-07-04 17:56:27.688851 | orchestrator |
2025-07-04 17:56:27.688889 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-07-04 17:56:27.689883 | orchestrator | Friday 04 July 2025 17:56:27 +0000 (0:00:00.680) 0:00:05.006 ***********
2025-07-04 17:56:27.842872 | orchestrator | skipping: [testbed-node-4]
2025-07-04 17:56:27.845359 | orchestrator |
2025-07-04 17:56:27.845379 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-07-04 17:56:27.845385 | orchestrator |
2025-07-04 17:56:27.846108 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-07-04 17:56:27.846129 | orchestrator | Friday 04 July 2025 17:56:27 +0000 (0:00:00.152) 0:00:05.158 ***********
2025-07-04 17:56:27.960341 | orchestrator | skipping: [testbed-node-5]
2025-07-04 17:56:27.961446 | orchestrator |
2025-07-04 17:56:27.964328 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-07-04 17:56:27.964363 | orchestrator | Friday 04 July 2025 17:56:27 +0000 (0:00:00.121) 0:00:05.280 ***********
2025-07-04 17:56:28.637816 | orchestrator | changed: [testbed-node-5]
2025-07-04 17:56:28.638426 | orchestrator |
2025-07-04 17:56:28.639504 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-07-04 17:56:28.639960 | orchestrator | Friday 04 July 2025 17:56:28 +0000 (0:00:00.675) 0:00:05.955 ***********
2025-07-04 17:56:28.674758 | orchestrator | skipping: [testbed-node-5]
2025-07-04 17:56:28.675730 | orchestrator |
2025-07-04 17:56:28.678158 | orchestrator | PLAY RECAP *********************************************************************
2025-07-04 17:56:28.679155 | orchestrator | 2025-07-04 17:56:28 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-07-04 17:56:28.679177 | orchestrator | 2025-07-04 17:56:28 | INFO  | Please wait and do not abort execution.
2025-07-04 17:56:28.679828 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-04 17:56:28.680608 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-04 17:56:28.681000 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-04 17:56:28.681700 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-04 17:56:28.682401 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-04 17:56:28.682867 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-04 17:56:28.683125 | orchestrator |
2025-07-04 17:56:28.683622 | orchestrator |
2025-07-04 17:56:28.683953 | orchestrator | TASKS RECAP ********************************************************************
2025-07-04 17:56:28.684373 | orchestrator | Friday 04 July 2025 17:56:28 +0000 (0:00:00.038) 0:00:05.994 ***********
2025-07-04 17:56:28.684767 | orchestrator | ===============================================================================
2025-07-04 17:56:28.685373 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.32s
2025-07-04 17:56:28.685647 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.78s
2025-07-04 17:56:28.686103 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.68s
2025-07-04 17:56:29.253596 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes
2025-07-04 17:56:31.208219 | orchestrator | Registering Redlock._acquired_script
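The reboot play above deliberately does not block ("do not wait for the reboot to complete"); a separate wait-for-connection play then polls until SSH is back. A minimal sketch of this two-step pattern, under the assumption that the non-blocking reboot uses the common async fire-and-forget idiom (the actual osism playbooks may implement it differently):

```yaml
# Sketch only: illustrative tasks, not the actual osism reboot /
# wait-for-connection playbooks.
- name: Reboot system - do not wait for the reboot to complete
  ansible.builtin.shell: sleep 2 && /sbin/reboot
  async: 1        # detach so the task returns before the host goes down
  poll: 0

- name: Wait until remote system is reachable
  ansible.builtin.wait_for_connection:
    delay: 10     # give the host time to actually go down first
    timeout: 600
```

Splitting the two steps into separate plays, as the log does, lets all nodes reboot in parallel and keeps the ~12 s reconnect wait out of the reboot play's own runtime.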
2025-07-04 17:56:31.208339 | orchestrator | Registering Redlock._extend_script
2025-07-04 17:56:31.208363 | orchestrator | Registering Redlock._release_script
2025-07-04 17:56:31.273229 | orchestrator | 2025-07-04 17:56:31 | INFO  | Task 65598d7b-e9cd-438f-a387-c9e5126bfb0f (wait-for-connection) was prepared for execution.
2025-07-04 17:56:31.273317 | orchestrator | 2025-07-04 17:56:31 | INFO  | It takes a moment until task 65598d7b-e9cd-438f-a387-c9e5126bfb0f (wait-for-connection) has been started and output is visible here.
2025-07-04 17:56:35.548156 | orchestrator |
2025-07-04 17:56:35.548272 | orchestrator | PLAY [Wait until remote systems are reachable] *********************************
2025-07-04 17:56:35.551944 | orchestrator |
2025-07-04 17:56:35.552020 | orchestrator | TASK [Wait until remote system is reachable] ***********************************
2025-07-04 17:56:35.552033 | orchestrator | Friday 04 July 2025 17:56:35 +0000 (0:00:00.244) 0:00:00.244 ***********
2025-07-04 17:56:47.343378 | orchestrator | ok: [testbed-node-2]
2025-07-04 17:56:47.343523 | orchestrator | ok: [testbed-node-0]
2025-07-04 17:56:47.343685 | orchestrator | ok: [testbed-node-1]
2025-07-04 17:56:47.344680 | orchestrator | ok: [testbed-node-3]
2025-07-04 17:56:47.345766 | orchestrator | ok: [testbed-node-4]
2025-07-04 17:56:47.347742 | orchestrator | ok: [testbed-node-5]
2025-07-04 17:56:47.348671 | orchestrator |
2025-07-04 17:56:47.349363 | orchestrator | PLAY RECAP *********************************************************************
2025-07-04 17:56:47.349930 | orchestrator | 2025-07-04 17:56:47 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-07-04 17:56:47.350445 | orchestrator | 2025-07-04 17:56:47 | INFO  | Please wait and do not abort execution.
2025-07-04 17:56:47.351472 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-04 17:56:47.351994 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-04 17:56:47.352735 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-04 17:56:47.353382 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-04 17:56:47.354112 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-04 17:56:47.354524 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-04 17:56:47.355081 | orchestrator |
2025-07-04 17:56:47.355537 | orchestrator |
2025-07-04 17:56:47.356312 | orchestrator | TASKS RECAP ********************************************************************
2025-07-04 17:56:47.357108 | orchestrator | Friday 04 July 2025 17:56:47 +0000 (0:00:11.796) 0:00:12.041 ***********
2025-07-04 17:56:47.357485 | orchestrator | ===============================================================================
2025-07-04 17:56:47.358158 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.80s
2025-07-04 17:56:47.977421 | orchestrator | + osism apply hddtemp
2025-07-04 17:56:49.708252 | orchestrator | Registering Redlock._acquired_script
2025-07-04 17:56:49.708351 | orchestrator | Registering Redlock._extend_script
2025-07-04 17:56:49.708365 | orchestrator | Registering Redlock._release_script
2025-07-04 17:56:49.777857 | orchestrator | 2025-07-04 17:56:49 | INFO  | Task 579c6238-ea68-4dee-9333-a253e14c73d5 (hddtemp) was prepared for execution.
2025-07-04 17:56:49.777959 | orchestrator | 2025-07-04 17:56:49 | INFO  | It takes a moment until task 579c6238-ea68-4dee-9333-a253e14c73d5 (hddtemp) has been started and output is visible here.
2025-07-04 17:56:54.117741 | orchestrator |
2025-07-04 17:56:54.119109 | orchestrator | PLAY [Apply role hddtemp] ******************************************************
2025-07-04 17:56:54.120568 | orchestrator |
2025-07-04 17:56:54.122445 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] *****
2025-07-04 17:56:54.123501 | orchestrator | Friday 04 July 2025 17:56:54 +0000 (0:00:00.293) 0:00:00.293 ***********
2025-07-04 17:56:54.303133 | orchestrator | ok: [testbed-manager]
2025-07-04 17:56:54.382761 | orchestrator | ok: [testbed-node-0]
2025-07-04 17:56:54.465498 | orchestrator | ok: [testbed-node-1]
2025-07-04 17:56:54.544958 | orchestrator | ok: [testbed-node-2]
2025-07-04 17:56:54.737906 | orchestrator | ok: [testbed-node-3]
2025-07-04 17:56:54.865962 | orchestrator | ok: [testbed-node-4]
2025-07-04 17:56:54.866606 | orchestrator | ok: [testbed-node-5]
2025-07-04 17:56:54.867388 | orchestrator |
2025-07-04 17:56:54.868607 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] ****
2025-07-04 17:56:54.868866 | orchestrator | Friday 04 July 2025 17:56:54 +0000 (0:00:00.748) 0:00:01.041 ***********
2025-07-04 17:56:56.065796 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-04 17:56:56.065979 | orchestrator |
2025-07-04 17:56:56.066194 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] *************************
2025-07-04 17:56:56.066528 | orchestrator | Friday 04 July 2025 17:56:56 +0000 (0:00:01.198) 0:00:02.239 ***********
2025-07-04 17:56:58.029188 | orchestrator | ok: [testbed-manager]
2025-07-04 17:56:58.029554 | orchestrator | ok: [testbed-node-0]
2025-07-04 17:56:58.030793 | orchestrator | ok: [testbed-node-1]
2025-07-04 17:56:58.031411 | orchestrator | ok: [testbed-node-2]
2025-07-04 17:56:58.032219 | orchestrator | ok: [testbed-node-3]
2025-07-04 17:56:58.033910 | orchestrator | ok: [testbed-node-4]
2025-07-04 17:56:58.035204 | orchestrator | ok: [testbed-node-5]
2025-07-04 17:56:58.036019 | orchestrator |
2025-07-04 17:56:58.036729 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] *****************
2025-07-04 17:56:58.037816 | orchestrator | Friday 04 July 2025 17:56:58 +0000 (0:00:01.965) 0:00:04.205 ***********
2025-07-04 17:56:58.679544 | orchestrator | changed: [testbed-manager]
2025-07-04 17:56:58.768755 | orchestrator | changed: [testbed-node-0]
2025-07-04 17:56:59.218518 | orchestrator | changed: [testbed-node-1]
2025-07-04 17:56:59.218647 | orchestrator | changed: [testbed-node-2]
2025-07-04 17:56:59.222155 | orchestrator | changed: [testbed-node-3]
2025-07-04 17:56:59.223286 | orchestrator | changed: [testbed-node-4]
2025-07-04 17:56:59.224709 | orchestrator | changed: [testbed-node-5]
2025-07-04 17:56:59.225711 | orchestrator |
2025-07-04 17:56:59.226372 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] *********
2025-07-04 17:56:59.227091 | orchestrator | Friday 04 July 2025 17:56:59 +0000 (0:00:01.186) 0:00:05.391 ***********
2025-07-04 17:57:00.364272 | orchestrator | ok: [testbed-node-1]
2025-07-04 17:57:00.365090 | orchestrator | ok: [testbed-node-0]
2025-07-04 17:57:00.367183 | orchestrator | ok: [testbed-node-2]
2025-07-04 17:57:00.367500 | orchestrator | ok: [testbed-node-3]
2025-07-04 17:57:00.368963 | orchestrator | ok: [testbed-node-4]
2025-07-04 17:57:00.370183 | orchestrator | ok: [testbed-manager]
2025-07-04 17:57:00.371142 | orchestrator | ok: [testbed-node-5]
2025-07-04 17:57:00.373384 | orchestrator |
2025-07-04 17:57:00.373700 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] *******************
2025-07-04 17:57:00.374115 | orchestrator | Friday 04 July 2025 17:57:00 +0000 (0:00:01.148) 0:00:06.540 ***********
2025-07-04 17:57:00.819798 | orchestrator | skipping: [testbed-node-0]
2025-07-04 17:57:00.906255 | orchestrator | skipping: [testbed-node-1]
2025-07-04 17:57:00.984988 | orchestrator | changed: [testbed-manager]
2025-07-04 17:57:01.079642 | orchestrator | skipping: [testbed-node-2]
2025-07-04 17:57:01.222152 | orchestrator | skipping: [testbed-node-3]
2025-07-04 17:57:01.222722 | orchestrator | skipping: [testbed-node-4]
2025-07-04 17:57:01.224358 | orchestrator | skipping: [testbed-node-5]
2025-07-04 17:57:01.225391 | orchestrator |
2025-07-04 17:57:01.226800 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] *****************************
2025-07-04 17:57:01.227792 | orchestrator | Friday 04 July 2025 17:57:01 +0000 (0:00:00.855) 0:00:07.396 ***********
2025-07-04 17:57:14.552203 | orchestrator | changed: [testbed-manager]
2025-07-04 17:57:14.552319 | orchestrator | changed: [testbed-node-3]
2025-07-04 17:57:14.557087 | orchestrator | changed: [testbed-node-2]
2025-07-04 17:57:14.559347 | orchestrator | changed: [testbed-node-1]
2025-07-04 17:57:14.561119 | orchestrator | changed: [testbed-node-5]
2025-07-04 17:57:14.562651 | orchestrator | changed: [testbed-node-0]
2025-07-04 17:57:14.563730 | orchestrator | changed: [testbed-node-4]
2025-07-04 17:57:14.564965 | orchestrator |
2025-07-04 17:57:14.566267 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] ****
2025-07-04 17:57:14.567272 | orchestrator | Friday 04 July 2025 17:57:14 +0000 (0:00:13.330) 0:00:20.726 ***********
2025-07-04 17:57:15.967845 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-04 17:57:15.968672 | orchestrator |
2025-07-04 17:57:15.969847 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] **********************
2025-07-04 17:57:15.971129 | orchestrator | Friday 04 July 2025 17:57:15 +0000 (0:00:01.413) 0:00:22.140 ***********
2025-07-04 17:57:18.606264 | orchestrator | changed: [testbed-manager]
2025-07-04 17:57:18.607074 | orchestrator | changed: [testbed-node-2]
2025-07-04 17:57:18.609149 | orchestrator | changed: [testbed-node-0]
2025-07-04 17:57:18.611117 | orchestrator | changed: [testbed-node-3]
2025-07-04 17:57:18.611829 | orchestrator | changed: [testbed-node-4]
2025-07-04 17:57:18.613239 | orchestrator | changed: [testbed-node-5]
2025-07-04 17:57:18.614468 | orchestrator | changed: [testbed-node-1]
2025-07-04 17:57:18.615168 | orchestrator |
2025-07-04 17:57:18.616661 | orchestrator | PLAY RECAP *********************************************************************
2025-07-04 17:57:18.617334 | orchestrator | 2025-07-04 17:57:18 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-07-04 17:57:18.617365 | orchestrator | 2025-07-04 17:57:18 | INFO  | Please wait and do not abort execution.
2025-07-04 17:57:18.618628 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-04 17:57:18.619730 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-07-04 17:57:18.620529 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-07-04 17:57:18.620717 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-07-04 17:57:18.622106 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-07-04 17:57:18.622438 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-07-04 17:57:18.623621 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-07-04 17:57:18.623933 | orchestrator |
2025-07-04 17:57:18.624826 | orchestrator |
2025-07-04 17:57:18.625471 | orchestrator | TASKS RECAP ********************************************************************
2025-07-04 17:57:18.626227 | orchestrator | Friday 04 July 2025 17:57:18 +0000 (0:00:02.641) 0:00:24.781 ***********
2025-07-04 17:57:18.626676 | orchestrator | ===============================================================================
2025-07-04 17:57:18.626727 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 13.33s
2025-07-04 17:57:18.627295 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 2.64s
2025-07-04 17:57:18.627743 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.97s
2025-07-04 17:57:18.628082 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.41s
2025-07-04 17:57:18.628805 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.20s
2025-07-04 17:57:18.629190 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.19s
2025-07-04 17:57:18.629678 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.15s
2025-07-04 17:57:18.630074 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.86s
2025-07-04 17:57:18.630719 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.75s
2025-07-04 17:57:19.281445 | orchestrator | ++ semver 9.1.0 7.1.1
2025-07-04 17:57:19.343332 | orchestrator | + [[ 1 -ge 0 ]]
2025-07-04 17:57:19.343419 | orchestrator | + sudo systemctl restart manager.service
2025-07-04 17:57:33.045460 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2025-07-04 17:57:33.045622 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2025-07-04 17:57:33.045638 | orchestrator | + local max_attempts=60
2025-07-04 17:57:33.045649 | orchestrator | + local name=ceph-ansible
2025-07-04 17:57:33.045659 | orchestrator | + local attempt_num=1
2025-07-04 17:57:33.045668 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-07-04 17:57:33.083218 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-07-04 17:57:33.083389 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-07-04 17:57:33.083408 | orchestrator | + sleep 5
2025-07-04 17:57:38.090267 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-07-04 17:57:38.122073 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-07-04 17:57:38.122170 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-07-04 17:57:38.122184 | orchestrator | + sleep 5
2025-07-04 17:57:43.126314 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-07-04 17:57:43.164044 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-07-04 17:57:43.164142 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-07-04 17:57:43.164157 | orchestrator | + sleep 5
2025-07-04 17:57:48.167356 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-07-04 17:57:48.211187 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-07-04 17:57:48.211311 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-07-04 17:57:48.211328 | orchestrator | + sleep 5
2025-07-04 17:57:53.215125 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-07-04 17:57:53.257128 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-07-04 17:57:53.257237 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-07-04 17:57:53.257256 | orchestrator | + sleep 5
2025-07-04 17:57:58.262452 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-07-04 17:57:58.301010 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-07-04 17:57:58.301106 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-07-04 17:57:58.301119 | orchestrator | + sleep 5
2025-07-04 17:58:03.306359 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-07-04 17:58:03.350364 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-07-04 17:58:03.350466 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-07-04 17:58:03.350482 | orchestrator | + sleep 5
2025-07-04 17:58:08.354324 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-07-04 17:58:08.404263 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-07-04 17:58:08.404367 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-07-04 17:58:08.404382 | orchestrator | + sleep 5
2025-07-04 17:58:13.407900 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-07-04 17:58:13.445146 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-07-04 17:58:13.445229 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-07-04 17:58:13.445244 | orchestrator | + sleep 5
2025-07-04 17:58:18.449259 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-07-04 17:58:18.490344 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-07-04 17:58:18.490453 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-07-04 17:58:18.490505 | orchestrator | + sleep 5
2025-07-04 17:58:23.494679 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-07-04 17:58:23.538546 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-07-04 17:58:23.538653 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-07-04 17:58:23.538668 | orchestrator | + sleep 5
2025-07-04 17:58:28.544087 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-07-04 17:58:28.584011 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-07-04 17:58:28.584118 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-07-04 17:58:28.584134 | orchestrator | + sleep 5
2025-07-04 17:58:33.589557 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-07-04 17:58:33.628818 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-07-04 17:58:33.628905 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-07-04 17:58:33.628914 | orchestrator | + sleep 5
2025-07-04 17:58:38.633816 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-07-04 17:58:38.677567 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-07-04 17:58:38.677690 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2025-07-04 17:58:38.677747 | orchestrator | + local max_attempts=60
2025-07-04 17:58:38.677762 | orchestrator | + local name=kolla-ansible
2025-07-04 17:58:38.677786 | orchestrator | + local attempt_num=1
2025-07-04 17:58:38.677873 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2025-07-04 17:58:38.725476 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-07-04 17:58:38.725592 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2025-07-04 17:58:38.725606 | orchestrator | + local max_attempts=60
2025-07-04 17:58:38.725619 | orchestrator | + local name=osism-ansible
2025-07-04 17:58:38.725631 | orchestrator | + local attempt_num=1
2025-07-04 17:58:38.726184 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2025-07-04 17:58:38.757871 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-07-04 17:58:38.757982 | orchestrator | + [[ true == \t\r\u\e ]]
2025-07-04 17:58:38.757997 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2025-07-04 17:58:38.950099 | orchestrator | ARA in ceph-ansible already disabled.
2025-07-04 17:58:39.136117 | orchestrator | ARA in kolla-ansible already disabled.
2025-07-04 17:58:39.308992 | orchestrator | ARA in osism-ansible already disabled.
2025-07-04 17:58:39.457730 | orchestrator | ARA in osism-kubernetes already disabled.
2025-07-04 17:58:39.458654 | orchestrator | + osism apply gather-facts
2025-07-04 17:58:41.266634 | orchestrator | Registering Redlock._acquired_script
2025-07-04 17:58:41.266754 | orchestrator | Registering Redlock._extend_script
2025-07-04 17:58:41.266768 | orchestrator | Registering Redlock._release_script
2025-07-04 17:58:41.332861 | orchestrator | 2025-07-04 17:58:41 | INFO  | Task 8bc22888-20d1-45fd-9d97-6890e895e5c5 (gather-facts) was prepared for execution.
2025-07-04 17:58:41.332978 | orchestrator | 2025-07-04 17:58:41 | INFO  | It takes a moment until task 8bc22888-20d1-45fd-9d97-6890e895e5c5 (gather-facts) has been started and output is visible here.
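The `set -x` trace above shows the deploy script's `wait_for_container_healthy` helper polling `docker inspect` every five seconds until a container reports `healthy`. A minimal sketch of such a loop, reconstructed from the trace; the injectable `status_cmd` parameter is an assumption added for illustration (the traced script queries `docker` directly):

```shell
# Sketch reconstructed from the trace above; not the verbatim testbed helper.
# status_cmd is a hypothetical third parameter so the loop can be exercised
# without Docker; the traced script runs
#   /usr/bin/docker inspect -f '{{.State.Health.Status}}' "$name"
wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local status_cmd=${3:-"/usr/bin/docker inspect -f {{.State.Health.Status}} $name"}
    local attempt_num=1
    until [[ "$($status_cmd)" == "healthy" ]]; do
        if (( attempt_num++ == max_attempts )); then
            echo "container $name did not become healthy in time" >&2
            return 1
        fi
        sleep 5
    done
}
```

In the log, ceph-ansible cycles through `unhealthy` and `starting` for roughly a minute before reporting `healthy`, while kolla-ansible and osism-ansible are healthy on the first probe.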
2025-07-04 17:58:45.331564 | orchestrator |
2025-07-04 17:58:45.332796 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-07-04 17:58:45.335366 | orchestrator |
2025-07-04 17:58:45.338242 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-07-04 17:58:45.339540 | orchestrator | Friday 04 July 2025 17:58:45 +0000 (0:00:00.219) 0:00:00.219 ***********
2025-07-04 17:58:51.124716 | orchestrator | ok: [testbed-manager]
2025-07-04 17:58:51.124776 | orchestrator | ok: [testbed-node-1]
2025-07-04 17:58:51.127351 | orchestrator | ok: [testbed-node-2]
2025-07-04 17:58:51.129787 | orchestrator | ok: [testbed-node-0]
2025-07-04 17:58:51.129963 | orchestrator | ok: [testbed-node-3]
2025-07-04 17:58:51.129991 | orchestrator | ok: [testbed-node-4]
2025-07-04 17:58:51.130003 | orchestrator | ok: [testbed-node-5]
2025-07-04 17:58:51.133947 | orchestrator |
2025-07-04 17:58:51.134003 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-07-04 17:58:51.134061 | orchestrator |
2025-07-04 17:58:51.134076 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-07-04 17:58:51.134089 | orchestrator | Friday 04 July 2025 17:58:51 +0000 (0:00:05.796) 0:00:06.015 ***********
2025-07-04 17:58:51.270532 | orchestrator | skipping: [testbed-manager]
2025-07-04 17:58:51.352184 | orchestrator | skipping: [testbed-node-0]
2025-07-04 17:58:51.430306 | orchestrator | skipping: [testbed-node-1]
2025-07-04 17:58:51.508967 | orchestrator | skipping: [testbed-node-2]
2025-07-04 17:58:51.585187 | orchestrator | skipping: [testbed-node-3]
2025-07-04 17:58:51.630609 | orchestrator | skipping: [testbed-node-4]
2025-07-04 17:58:51.632099 | orchestrator | skipping: [testbed-node-5]
2025-07-04 17:58:51.633581 | orchestrator |
2025-07-04 17:58:51.634528 | orchestrator | PLAY RECAP *********************************************************************
2025-07-04 17:58:51.635263 | orchestrator | 2025-07-04 17:58:51 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-07-04 17:58:51.635727 | orchestrator | 2025-07-04 17:58:51 | INFO  | Please wait and do not abort execution.
2025-07-04 17:58:51.637019 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-07-04 17:58:51.637983 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-07-04 17:58:51.639036 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-07-04 17:58:51.639895 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-07-04 17:58:51.640354 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-07-04 17:58:51.641231 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-07-04 17:58:51.642298 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-07-04 17:58:51.643078 | orchestrator |
2025-07-04 17:58:51.643612 | orchestrator |
2025-07-04 17:58:51.644418 | orchestrator | TASKS RECAP ********************************************************************
2025-07-04 17:58:51.645132 | orchestrator | Friday 04 July 2025 17:58:51 +0000 (0:00:00.506) 0:00:06.522 ***********
2025-07-04 17:58:51.645689 | orchestrator | ===============================================================================
2025-07-04 17:58:51.646554 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.80s
2025-07-04 17:58:51.646911 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.51s
2025-07-04 17:58:52.288839 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper
2025-07-04 17:58:52.303512 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes
2025-07-04 17:58:52.320037 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi
2025-07-04 17:58:52.336819 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible
2025-07-04 17:58:52.348105 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook
2025-07-04 17:58:52.358835 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure
2025-07-04 17:58:52.370558 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack
2025-07-04 17:58:52.385986 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring
2025-07-04 17:58:52.403563 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes
2025-07-04 17:58:52.418091 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi
2025-07-04 17:58:52.437564 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible
2025-07-04 17:58:52.454253 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook
2025-07-04 17:58:52.467712 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure
2025-07-04 17:58:52.479813 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack
2025-07-04 17:58:52.492227 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring
2025-07-04 17:58:52.513841 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack
2025-07-04 17:58:52.529518 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia
2025-07-04 17:58:52.544544 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi
2025-07-04 17:58:52.564238 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry
2025-07-04 17:58:52.577691 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images
2025-07-04 17:58:52.596750 | orchestrator | + [[ false == \t\r\u\e ]]
2025-07-04 17:58:52.709860 | orchestrator | ok: Runtime: 0:20:45.170692
2025-07-04 17:58:52.812406 |
2025-07-04 17:58:52.812591 | TASK [Deploy services]
2025-07-04 17:58:53.346365 | orchestrator | skipping: Conditional result was False
2025-07-04 17:58:53.367177 |
2025-07-04 17:58:53.367384 | TASK [Deploy in a nutshell]
2025-07-04 17:58:54.133973 | orchestrator |
2025-07-04 17:58:54.134105 | orchestrator | # PULL IMAGES
2025-07-04 17:58:54.134116 | orchestrator |
2025-07-04 17:58:54.134122 | orchestrator | + set -e
2025-07-04 17:58:54.134130 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-07-04 17:58:54.134141 | orchestrator | ++ export INTERACTIVE=false
2025-07-04 17:58:54.134148 | orchestrator | ++ INTERACTIVE=false
2025-07-04 17:58:54.134173 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-07-04 17:58:54.134184 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-07-04 17:58:54.134191 | orchestrator | + source /opt/manager-vars.sh
2025-07-04 17:58:54.134197 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-07-04 17:58:54.134206 | orchestrator | ++ NUMBER_OF_NODES=6
2025-07-04 17:58:54.134211 | orchestrator | ++ export CEPH_VERSION=reef
2025-07-04 17:58:54.134220 | orchestrator | ++ CEPH_VERSION=reef
2025-07-04 17:58:54.134225 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-07-04 17:58:54.134233 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-07-04 17:58:54.134238 | orchestrator | ++ export MANAGER_VERSION=9.1.0
2025-07-04 17:58:54.134245 | orchestrator | ++ MANAGER_VERSION=9.1.0
2025-07-04 17:58:54.134251 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-07-04 17:58:54.134256 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-07-04 17:58:54.134261 | orchestrator | ++ export ARA=false
2025-07-04 17:58:54.134266 | orchestrator | ++ ARA=false
2025-07-04 17:58:54.134271 | orchestrator | ++ export DEPLOY_MODE=manager
2025-07-04 17:58:54.134276 | orchestrator | ++ DEPLOY_MODE=manager
2025-07-04 17:58:54.134281 | orchestrator | ++ export TEMPEST=false
2025-07-04 17:58:54.134285 | orchestrator | ++ TEMPEST=false
2025-07-04 17:58:54.134290 | orchestrator | ++ export IS_ZUUL=true
2025-07-04 17:58:54.134295 | orchestrator | ++ IS_ZUUL=true
2025-07-04 17:58:54.134300 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.186
2025-07-04 17:58:54.134305 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.186
2025-07-04 17:58:54.134310 | orchestrator | ++ export EXTERNAL_API=false
2025-07-04 17:58:54.134314 | orchestrator | ++ EXTERNAL_API=false
2025-07-04 17:58:54.134319 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-07-04 17:58:54.134324 | orchestrator | ++ IMAGE_USER=ubuntu
2025-07-04 17:58:54.134329 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-07-04 17:58:54.134334 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-07-04 17:58:54.134338 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-07-04 17:58:54.134347 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-07-04 17:58:54.134352 | orchestrator | + echo
2025-07-04 17:58:54.134357 | orchestrator | + echo '# PULL IMAGES'
2025-07-04 17:58:54.134362 | orchestrator | + echo
2025-07-04 17:58:54.134367 | orchestrator | ++ semver 9.1.0 7.0.0
2025-07-04 17:58:54.193222 | orchestrator | + [[ 1 -ge 0 ]]
2025-07-04 17:58:54.193330 | orchestrator | + osism apply -r 2 -e custom pull-images
2025-07-04 17:58:55.934664 | orchestrator | 2025-07-04 17:58:55 | INFO  | Trying to run play pull-images in environment custom
2025-07-04 17:58:55.937482 | orchestrator | Registering Redlock._acquired_script
2025-07-04 17:58:55.937524 | orchestrator | Registering Redlock._extend_script
2025-07-04 17:58:55.937530 | orchestrator | Registering Redlock._release_script
2025-07-04 17:58:56.000253 | orchestrator | 2025-07-04 17:58:55 | INFO  | Task fafddd67-6b7e-4ad3-a55d-41bde7dc61c5 (pull-images) was prepared for execution.
2025-07-04 17:58:56.000349 | orchestrator | 2025-07-04 17:58:55 | INFO  | It takes a moment until task fafddd67-6b7e-4ad3-a55d-41bde7dc61c5 (pull-images) has been started and output is visible here.
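The `semver 9.1.0 7.0.0` call above prints `1`, which the script gates with `[[ 1 -ge 0 ]]` so that version-dependent steps only run on sufficiently new manager releases. The helper's implementation is not shown in this log; a plausible sketch of such a three-way comparator using GNU `sort -V` (an assumption, not the testbed's actual code):

```shell
# Illustrative three-way version compare in the spirit of the `semver A B`
# helper traced above: prints 1 if A > B, 0 if A == B, -1 if A < B.
# Uses GNU sort -V; the real helper's implementation may differ.
semver() {
    if [[ "$1" == "$2" ]]; then
        echo 0
    elif [[ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" == "$2" ]]; then
        echo 1    # $2 sorts first, so $1 is the newer version
    else
        echo -1
    fi
}
```

The same pattern appears earlier in the log, where `semver 9.1.0 7.1.1` guards the `systemctl restart manager.service` step.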
2025-07-04 17:59:00.114660 | orchestrator |
2025-07-04 17:59:00.116934 | orchestrator | PLAY [Pull images] *************************************************************
2025-07-04 17:59:00.117477 | orchestrator |
2025-07-04 17:59:00.119129 | orchestrator | TASK [Pull keystone image] *****************************************************
2025-07-04 17:59:00.120299 | orchestrator | Friday 04 July 2025 17:59:00 +0000 (0:00:00.158) 0:00:00.158 ***********
2025-07-04 18:00:06.211180 | orchestrator | changed: [testbed-manager]
2025-07-04 18:00:06.211304 | orchestrator |
2025-07-04 18:00:06.211768 | orchestrator | TASK [Pull other images] *******************************************************
2025-07-04 18:00:06.212466 | orchestrator | Friday 04 July 2025 18:00:06 +0000 (0:01:06.101) 0:01:06.260 ***********
2025-07-04 18:01:03.495700 | orchestrator | changed: [testbed-manager] => (item=aodh)
2025-07-04 18:01:03.495837 | orchestrator | changed: [testbed-manager] => (item=barbican)
2025-07-04 18:01:03.495853 | orchestrator | changed: [testbed-manager] => (item=ceilometer)
2025-07-04 18:01:03.495867 | orchestrator | changed: [testbed-manager] => (item=cinder)
2025-07-04 18:01:03.495938 | orchestrator | changed: [testbed-manager] => (item=common)
2025-07-04 18:01:03.497922 | orchestrator | changed: [testbed-manager] => (item=designate)
2025-07-04 18:01:03.499438 | orchestrator | changed: [testbed-manager] => (item=glance)
2025-07-04 18:01:03.501170 | orchestrator | changed: [testbed-manager] => (item=grafana)
2025-07-04 18:01:03.501890 | orchestrator | changed: [testbed-manager] => (item=horizon)
2025-07-04 18:01:03.502769 | orchestrator | changed: [testbed-manager] => (item=ironic)
2025-07-04 18:01:03.503837 | orchestrator | changed: [testbed-manager] => (item=loadbalancer)
2025-07-04 18:01:03.504753 | orchestrator | changed: [testbed-manager] => (item=magnum)
2025-07-04 18:01:03.505209 | orchestrator | changed: [testbed-manager] => (item=mariadb)
2025-07-04 18:01:03.505888 | orchestrator | changed: [testbed-manager] => (item=memcached)
2025-07-04 18:01:03.506546 | orchestrator | changed: [testbed-manager] => (item=neutron)
2025-07-04 18:01:03.507211 | orchestrator | changed: [testbed-manager] => (item=nova)
2025-07-04 18:01:03.508020 | orchestrator | changed: [testbed-manager] => (item=octavia)
2025-07-04 18:01:03.508271 | orchestrator | changed: [testbed-manager] => (item=opensearch)
2025-07-04 18:01:03.508771 | orchestrator | changed: [testbed-manager] => (item=openvswitch)
2025-07-04 18:01:03.509164 | orchestrator | changed: [testbed-manager] => (item=ovn)
2025-07-04 18:01:03.510939 | orchestrator | changed: [testbed-manager] => (item=placement)
2025-07-04 18:01:03.511376 | orchestrator | changed: [testbed-manager] => (item=rabbitmq)
2025-07-04 18:01:03.511659 | orchestrator | changed: [testbed-manager] => (item=redis)
2025-07-04 18:01:03.512071 | orchestrator | changed: [testbed-manager] => (item=skyline)
2025-07-04 18:01:03.512615 | orchestrator |
2025-07-04 18:01:03.512859 | orchestrator | PLAY RECAP *********************************************************************
2025-07-04 18:01:03.513234 | orchestrator | 2025-07-04 18:01:03 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-07-04 18:01:03.513336 | orchestrator | 2025-07-04 18:01:03 | INFO  | Please wait and do not abort execution.
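The "Pull other images" task above iterates over one item per Kolla service, pulling each image onto the manager. A rough dry-run sketch of the equivalent loop; the registry, `osism` namespace, and tag default are illustrative assumptions, and `echo` stands in for the real `docker pull`:

```shell
# Dry-run sketch of the per-service pull seen above. Registry, namespace,
# and tag are assumptions for illustration, not the testbed's real values.
pull_all() {
    local registry=${1:-quay.io}
    local tag=${2:-2024.2}
    local service
    for service in aodh barbican ceilometer cinder common designate glance \
                   grafana horizon ironic loadbalancer magnum mariadb \
                   memcached neutron nova octavia opensearch openvswitch \
                   ovn placement rabbitmq redis skyline; do
        echo "docker pull ${registry}/osism/${service}:${tag}"
    done
}
pull_all   # prints one pull command per service in the log's item list
```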
2025-07-04 18:01:03.514242 | orchestrator | testbed-manager : ok=2  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-04 18:01:03.514279 | orchestrator |
2025-07-04 18:01:03.514924 | orchestrator |
2025-07-04 18:01:03.515904 | orchestrator | TASKS RECAP ********************************************************************
2025-07-04 18:01:03.516635 | orchestrator | Friday 04 July 2025 18:01:03 +0000 (0:00:57.282) 0:02:03.542 ***********
2025-07-04 18:01:03.517577 | orchestrator | ===============================================================================
2025-07-04 18:01:03.518372 | orchestrator | Pull keystone image ---------------------------------------------------- 66.10s
2025-07-04 18:01:03.518942 | orchestrator | Pull other images ------------------------------------------------------ 57.28s
2025-07-04 18:01:06.026343 | orchestrator | 2025-07-04 18:01:06 | INFO  | Trying to run play wipe-partitions in environment custom
2025-07-04 18:01:06.031796 | orchestrator | Registering Redlock._acquired_script
2025-07-04 18:01:06.032062 | orchestrator | Registering Redlock._extend_script
2025-07-04 18:01:06.032092 | orchestrator | Registering Redlock._release_script
2025-07-04 18:01:06.095024 | orchestrator | 2025-07-04 18:01:06 | INFO  | Task 4c89be65-b0cc-4497-9bfb-d9f409e259d4 (wipe-partitions) was prepared for execution.
2025-07-04 18:01:06.095112 | orchestrator | 2025-07-04 18:01:06 | INFO  | It takes a moment until task 4c89be65-b0cc-4497-9bfb-d9f409e259d4 (wipe-partitions) has been started and output is visible here.
2025-07-04 18:01:10.177514 | orchestrator | 2025-07-04 18:01:10.177631 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2025-07-04 18:01:10.177650 | orchestrator | 2025-07-04 18:01:10.177681 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2025-07-04 18:01:10.178085 | orchestrator | Friday 04 July 2025 18:01:10 +0000 (0:00:00.132) 0:00:00.132 *********** 2025-07-04 18:01:10.844884 | orchestrator | changed: [testbed-node-3] 2025-07-04 18:01:10.849435 | orchestrator | changed: [testbed-node-4] 2025-07-04 18:01:10.850246 | orchestrator | changed: [testbed-node-5] 2025-07-04 18:01:10.851076 | orchestrator | 2025-07-04 18:01:10.852258 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2025-07-04 18:01:10.853986 | orchestrator | Friday 04 July 2025 18:01:10 +0000 (0:00:00.668) 0:00:00.800 *********** 2025-07-04 18:01:11.059688 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:01:11.201409 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:01:11.201513 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:01:11.201751 | orchestrator | 2025-07-04 18:01:11.203144 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2025-07-04 18:01:11.203690 | orchestrator | Friday 04 July 2025 18:01:11 +0000 (0:00:00.354) 0:00:01.155 *********** 2025-07-04 18:01:12.123903 | orchestrator | ok: [testbed-node-4] 2025-07-04 18:01:12.124114 | orchestrator | ok: [testbed-node-3] 2025-07-04 18:01:12.124140 | orchestrator | ok: [testbed-node-5] 2025-07-04 18:01:12.124379 | orchestrator | 2025-07-04 18:01:12.124937 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2025-07-04 18:01:12.125337 | orchestrator | Friday 04 July 2025 18:01:12 +0000 (0:00:00.924) 0:00:02.080 *********** 2025-07-04 18:01:12.296818 | orchestrator | skipping: 
[testbed-node-3] 2025-07-04 18:01:12.420647 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:01:12.420856 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:01:12.421374 | orchestrator | 2025-07-04 18:01:12.425037 | orchestrator | TASK [Check device availability] *********************************************** 2025-07-04 18:01:12.425164 | orchestrator | Friday 04 July 2025 18:01:12 +0000 (0:00:00.294) 0:00:02.375 *********** 2025-07-04 18:01:13.596055 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-07-04 18:01:13.601704 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-07-04 18:01:13.602828 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-07-04 18:01:13.607020 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-07-04 18:01:13.607104 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-07-04 18:01:13.607165 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-07-04 18:01:13.607632 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-07-04 18:01:13.608060 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-07-04 18:01:13.608750 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-07-04 18:01:13.610466 | orchestrator | 2025-07-04 18:01:13.610998 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2025-07-04 18:01:13.612112 | orchestrator | Friday 04 July 2025 18:01:13 +0000 (0:00:01.176) 0:00:03.551 *********** 2025-07-04 18:01:14.955337 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2025-07-04 18:01:14.958579 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2025-07-04 18:01:14.961936 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2025-07-04 18:01:14.963222 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2025-07-04 18:01:14.965731 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2025-07-04 18:01:14.967676 | orchestrator | ok: 
[testbed-node-5] => (item=/dev/sdc) 2025-07-04 18:01:14.970243 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2025-07-04 18:01:14.973024 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2025-07-04 18:01:14.973099 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2025-07-04 18:01:14.973113 | orchestrator | 2025-07-04 18:01:14.973125 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2025-07-04 18:01:14.975802 | orchestrator | Friday 04 July 2025 18:01:14 +0000 (0:00:01.358) 0:00:04.909 *********** 2025-07-04 18:01:17.272052 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-07-04 18:01:17.272569 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-07-04 18:01:17.273572 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-07-04 18:01:17.273965 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-07-04 18:01:17.274595 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-07-04 18:01:17.275098 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-07-04 18:01:17.279686 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-07-04 18:01:17.279725 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-07-04 18:01:17.279736 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-07-04 18:01:17.279834 | orchestrator | 2025-07-04 18:01:17.280316 | orchestrator | TASK [Reload udev rules] ******************************************************* 2025-07-04 18:01:17.281199 | orchestrator | Friday 04 July 2025 18:01:17 +0000 (0:00:02.321) 0:00:07.230 *********** 2025-07-04 18:01:17.889316 | orchestrator | changed: [testbed-node-3] 2025-07-04 18:01:17.889477 | orchestrator | changed: [testbed-node-4] 2025-07-04 18:01:17.890651 | orchestrator | changed: [testbed-node-5] 2025-07-04 18:01:17.893552 | orchestrator | 2025-07-04 18:01:17.893586 | orchestrator | TASK [Request device events from the 
kernel] *********************************** 2025-07-04 18:01:17.893600 | orchestrator | Friday 04 July 2025 18:01:17 +0000 (0:00:00.615) 0:00:07.846 *********** 2025-07-04 18:01:18.531652 | orchestrator | changed: [testbed-node-4] 2025-07-04 18:01:18.534754 | orchestrator | changed: [testbed-node-3] 2025-07-04 18:01:18.534870 | orchestrator | changed: [testbed-node-5] 2025-07-04 18:01:18.535300 | orchestrator | 2025-07-04 18:01:18.537712 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-04 18:01:18.537748 | orchestrator | 2025-07-04 18:01:18 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-07-04 18:01:18.537762 | orchestrator | 2025-07-04 18:01:18 | INFO  | Please wait and do not abort execution. 2025-07-04 18:01:18.538872 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-04 18:01:18.539461 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-04 18:01:18.540420 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-04 18:01:18.540594 | orchestrator | 2025-07-04 18:01:18.541115 | orchestrator | 2025-07-04 18:01:18.541355 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-04 18:01:18.541743 | orchestrator | Friday 04 July 2025 18:01:18 +0000 (0:00:00.640) 0:00:08.487 *********** 2025-07-04 18:01:18.542153 | orchestrator | =============================================================================== 2025-07-04 18:01:18.542872 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.32s 2025-07-04 18:01:18.543026 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.36s 2025-07-04 18:01:18.543762 | orchestrator | Check device availability 
----------------------------------------------- 1.18s 2025-07-04 18:01:18.543992 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.92s 2025-07-04 18:01:18.545005 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.67s 2025-07-04 18:01:18.545150 | orchestrator | Request device events from the kernel ----------------------------------- 0.64s 2025-07-04 18:01:18.545825 | orchestrator | Reload udev rules ------------------------------------------------------- 0.62s 2025-07-04 18:01:18.546118 | orchestrator | Remove all rook related logical devices --------------------------------- 0.35s 2025-07-04 18:01:18.546758 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.29s 2025-07-04 18:01:20.787419 | orchestrator | Registering Redlock._acquired_script 2025-07-04 18:01:20.787499 | orchestrator | Registering Redlock._extend_script 2025-07-04 18:01:20.787512 | orchestrator | Registering Redlock._release_script 2025-07-04 18:01:20.841305 | orchestrator | 2025-07-04 18:01:20 | INFO  | Task 7fd1ba2a-70e2-4a15-8bc8-cce591058f3d (facts) was prepared for execution. 2025-07-04 18:01:20.841354 | orchestrator | 2025-07-04 18:01:20 | INFO  | It takes a moment until task 7fd1ba2a-70e2-4a15-8bc8-cce591058f3d (facts) has been started and output is visible here. 
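The wipe-partitions recap above reduces to a short per-device sequence on each node: wipe filesystem/partition signatures, zero the first 32M, then refresh udev. A minimal sketch with the task names from the recap as comments; the exact flags are assumptions, and the commands are echoed rather than executed since they are destructive:

```shell
#!/bin/sh
# Hedged sketch of the per-device sequence the wipe-partitions play runs on
# /dev/sdb, /dev/sdc and /dev/sdd (flags are assumptions, not from the log).
wipe_plan() {
  for dev in "$@"; do
    echo "wipefs -a $dev"                          # Wipe partitions with wipefs
    echo "dd if=/dev/zero of=$dev bs=1M count=32"  # Overwrite first 32M with zeros
  done
  echo "udevadm control --reload-rules"            # Reload udev rules
  echo "udevadm trigger"                           # Request device events from the kernel
}
wipe_plan /dev/sdb /dev/sdc /dev/sdd
```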
2025-07-04 18:01:24.677183 | orchestrator | 2025-07-04 18:01:24.677489 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-07-04 18:01:24.677787 | orchestrator | 2025-07-04 18:01:24.678785 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-07-04 18:01:24.679160 | orchestrator | Friday 04 July 2025 18:01:24 +0000 (0:00:00.287) 0:00:00.287 *********** 2025-07-04 18:01:25.848725 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:01:25.848850 | orchestrator | ok: [testbed-manager] 2025-07-04 18:01:25.853192 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:01:25.853361 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:01:25.854472 | orchestrator | ok: [testbed-node-3] 2025-07-04 18:01:25.856437 | orchestrator | ok: [testbed-node-4] 2025-07-04 18:01:25.859291 | orchestrator | ok: [testbed-node-5] 2025-07-04 18:01:25.859339 | orchestrator | 2025-07-04 18:01:25.859347 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-07-04 18:01:25.863153 | orchestrator | Friday 04 July 2025 18:01:25 +0000 (0:00:01.169) 0:00:01.456 *********** 2025-07-04 18:01:26.019968 | orchestrator | skipping: [testbed-manager] 2025-07-04 18:01:26.118003 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:01:26.203019 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:01:26.285020 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:01:26.369995 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:01:27.212944 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:01:27.213157 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:01:27.217907 | orchestrator | 2025-07-04 18:01:27.218255 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-07-04 18:01:27.219621 | orchestrator | 2025-07-04 18:01:27.222841 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2025-07-04 18:01:27.224482 | orchestrator | Friday 04 July 2025 18:01:27 +0000 (0:00:01.361) 0:00:02.818 *********** 2025-07-04 18:01:31.942351 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:01:31.943910 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:01:31.944914 | orchestrator | ok: [testbed-manager] 2025-07-04 18:01:31.947630 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:01:31.947661 | orchestrator | ok: [testbed-node-3] 2025-07-04 18:01:31.948004 | orchestrator | ok: [testbed-node-4] 2025-07-04 18:01:31.948571 | orchestrator | ok: [testbed-node-5] 2025-07-04 18:01:31.949942 | orchestrator | 2025-07-04 18:01:31.950952 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-07-04 18:01:31.952189 | orchestrator | 2025-07-04 18:01:31.955237 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-07-04 18:01:31.956465 | orchestrator | Friday 04 July 2025 18:01:31 +0000 (0:00:04.737) 0:00:07.555 *********** 2025-07-04 18:01:32.146359 | orchestrator | skipping: [testbed-manager] 2025-07-04 18:01:32.224905 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:01:32.298540 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:01:32.393326 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:01:32.482924 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:01:32.525180 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:01:32.526162 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:01:32.527286 | orchestrator | 2025-07-04 18:01:32.527862 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-04 18:01:32.528426 | orchestrator | 2025-07-04 18:01:32 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 
2025-07-04 18:01:32.528555 | orchestrator | 2025-07-04 18:01:32 | INFO  | Please wait and do not abort execution. 2025-07-04 18:01:32.529833 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-04 18:01:32.529990 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-04 18:01:32.531380 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-04 18:01:32.531479 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-04 18:01:32.532057 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-04 18:01:32.533766 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-04 18:01:32.533865 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-04 18:01:32.533877 | orchestrator | 2025-07-04 18:01:32.534896 | orchestrator | 2025-07-04 18:01:32.534923 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-04 18:01:32.535982 | orchestrator | Friday 04 July 2025 18:01:32 +0000 (0:00:00.582) 0:00:08.137 *********** 2025-07-04 18:01:32.536006 | orchestrator | =============================================================================== 2025-07-04 18:01:32.536017 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.74s 2025-07-04 18:01:32.536028 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.36s 2025-07-04 18:01:32.536410 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.17s 2025-07-04 18:01:32.536693 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.58s 2025-07-04 
18:01:35.194182 | orchestrator | 2025-07-04 18:01:35 | INFO  | Task 834bd922-dfeb-4988-9c36-8f5eef9a5e44 (ceph-configure-lvm-volumes) was prepared for execution. 2025-07-04 18:01:35.194304 | orchestrator | 2025-07-04 18:01:35 | INFO  | It takes a moment until task 834bd922-dfeb-4988-9c36-8f5eef9a5e44 (ceph-configure-lvm-volumes) has been started and output is visible here. 2025-07-04 18:01:39.921420 | orchestrator | 2025-07-04 18:01:39.921910 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-07-04 18:01:39.922831 | orchestrator | 2025-07-04 18:01:39.924380 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-07-04 18:01:39.924978 | orchestrator | Friday 04 July 2025 18:01:39 +0000 (0:00:00.343) 0:00:00.343 *********** 2025-07-04 18:01:40.207822 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-07-04 18:01:40.208106 | orchestrator | 2025-07-04 18:01:40.209039 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-07-04 18:01:40.209957 | orchestrator | Friday 04 July 2025 18:01:40 +0000 (0:00:00.286) 0:00:00.630 *********** 2025-07-04 18:01:40.449582 | orchestrator | ok: [testbed-node-3] 2025-07-04 18:01:40.449671 | orchestrator | 2025-07-04 18:01:40.450526 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-04 18:01:40.451831 | orchestrator | Friday 04 July 2025 18:01:40 +0000 (0:00:00.244) 0:00:00.874 *********** 2025-07-04 18:01:40.880725 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-07-04 18:01:40.881218 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-07-04 18:01:40.882310 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-07-04 18:01:40.882782 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-07-04 18:01:40.883375 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-07-04 18:01:40.884981 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-07-04 18:01:40.885938 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-07-04 18:01:40.888433 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-07-04 18:01:40.890224 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-07-04 18:01:40.891079 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-07-04 18:01:40.891823 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-07-04 18:01:40.892268 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-07-04 18:01:40.893209 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-07-04 18:01:40.893959 | orchestrator | 2025-07-04 18:01:40.895518 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-04 18:01:40.896791 | orchestrator | Friday 04 July 2025 18:01:40 +0000 (0:00:00.430) 0:00:01.305 *********** 2025-07-04 18:01:41.399439 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:01:41.404159 | orchestrator | 2025-07-04 18:01:41.407281 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-04 18:01:41.408817 | orchestrator | Friday 04 July 2025 18:01:41 +0000 (0:00:00.515) 0:00:01.821 *********** 2025-07-04 18:01:41.602112 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:01:41.603071 | orchestrator | 2025-07-04 18:01:41.603452 | orchestrator | 
TASK [Add known links to the list of available block devices] ****************** 2025-07-04 18:01:41.605696 | orchestrator | Friday 04 July 2025 18:01:41 +0000 (0:00:00.205) 0:00:02.026 *********** 2025-07-04 18:01:41.809404 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:01:41.809526 | orchestrator | 2025-07-04 18:01:41.809554 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-04 18:01:41.812685 | orchestrator | Friday 04 July 2025 18:01:41 +0000 (0:00:00.205) 0:00:02.232 *********** 2025-07-04 18:01:42.010503 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:01:42.010599 | orchestrator | 2025-07-04 18:01:42.011031 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-04 18:01:42.011380 | orchestrator | Friday 04 July 2025 18:01:42 +0000 (0:00:00.202) 0:00:02.434 *********** 2025-07-04 18:01:42.219022 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:01:42.222545 | orchestrator | 2025-07-04 18:01:42.223629 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-04 18:01:42.224356 | orchestrator | Friday 04 July 2025 18:01:42 +0000 (0:00:00.209) 0:00:02.644 *********** 2025-07-04 18:01:42.428040 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:01:42.429428 | orchestrator | 2025-07-04 18:01:42.431586 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-04 18:01:42.432906 | orchestrator | Friday 04 July 2025 18:01:42 +0000 (0:00:00.209) 0:00:02.853 *********** 2025-07-04 18:01:42.636649 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:01:42.636954 | orchestrator | 2025-07-04 18:01:42.638224 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-04 18:01:42.638377 | orchestrator | Friday 04 July 2025 18:01:42 +0000 (0:00:00.209) 0:00:03.063 *********** 2025-07-04 
18:01:42.869095 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:01:42.871127 | orchestrator | 2025-07-04 18:01:42.872550 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-04 18:01:42.873784 | orchestrator | Friday 04 July 2025 18:01:42 +0000 (0:00:00.230) 0:00:03.293 *********** 2025-07-04 18:01:43.331519 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_ab42ac05-5a2a-4b10-b0be-14fcaa2726cd) 2025-07-04 18:01:43.333923 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_ab42ac05-5a2a-4b10-b0be-14fcaa2726cd) 2025-07-04 18:01:43.336344 | orchestrator | 2025-07-04 18:01:43.336471 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-04 18:01:43.342465 | orchestrator | Friday 04 July 2025 18:01:43 +0000 (0:00:00.459) 0:00:03.752 *********** 2025-07-04 18:01:43.851627 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_f1ee158f-8183-4691-b988-cdb0b3746d63) 2025-07-04 18:01:43.854429 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_f1ee158f-8183-4691-b988-cdb0b3746d63) 2025-07-04 18:01:43.854969 | orchestrator | 2025-07-04 18:01:43.856016 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-04 18:01:43.856997 | orchestrator | Friday 04 July 2025 18:01:43 +0000 (0:00:00.522) 0:00:04.275 *********** 2025-07-04 18:01:44.670566 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_cc10544f-afe1-4b17-ac35-d479dbd44023) 2025-07-04 18:01:44.672639 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_cc10544f-afe1-4b17-ac35-d479dbd44023) 2025-07-04 18:01:44.673211 | orchestrator | 2025-07-04 18:01:44.673993 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-04 18:01:44.675307 | orchestrator | Friday 04 July 2025 18:01:44 +0000 
(0:00:00.817) 0:00:05.093 *********** 2025-07-04 18:01:45.365444 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_c678ea0e-f232-4db4-9458-94e4077f665f) 2025-07-04 18:01:45.366461 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_c678ea0e-f232-4db4-9458-94e4077f665f) 2025-07-04 18:01:45.367223 | orchestrator | 2025-07-04 18:01:45.368450 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-04 18:01:45.371092 | orchestrator | Friday 04 July 2025 18:01:45 +0000 (0:00:00.697) 0:00:05.790 *********** 2025-07-04 18:01:46.164091 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-07-04 18:01:46.164206 | orchestrator | 2025-07-04 18:01:46.167776 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-04 18:01:46.168307 | orchestrator | Friday 04 July 2025 18:01:46 +0000 (0:00:00.796) 0:00:06.587 *********** 2025-07-04 18:01:46.657875 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-07-04 18:01:46.660883 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-07-04 18:01:46.662825 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-07-04 18:01:46.664112 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-07-04 18:01:46.665989 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-07-04 18:01:46.667027 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-07-04 18:01:46.668446 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-07-04 18:01:46.669605 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml 
for testbed-node-3 => (item=loop7) 2025-07-04 18:01:46.670956 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-07-04 18:01:46.671865 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-07-04 18:01:46.672656 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-07-04 18:01:46.674437 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-07-04 18:01:46.675681 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-07-04 18:01:46.676742 | orchestrator | 2025-07-04 18:01:46.678066 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-04 18:01:46.678220 | orchestrator | Friday 04 July 2025 18:01:46 +0000 (0:00:00.492) 0:00:07.080 *********** 2025-07-04 18:01:46.859745 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:01:46.862356 | orchestrator | 2025-07-04 18:01:46.864987 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-04 18:01:46.866156 | orchestrator | Friday 04 July 2025 18:01:46 +0000 (0:00:00.203) 0:00:07.283 *********** 2025-07-04 18:01:47.067138 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:01:47.068415 | orchestrator | 2025-07-04 18:01:47.069423 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-04 18:01:47.073416 | orchestrator | Friday 04 July 2025 18:01:47 +0000 (0:00:00.208) 0:00:07.492 *********** 2025-07-04 18:01:47.275328 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:01:47.277822 | orchestrator | 2025-07-04 18:01:47.277866 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-04 18:01:47.278764 | orchestrator | Friday 04 July 2025 18:01:47 +0000 
(0:00:00.208) 0:00:07.700 *********** 2025-07-04 18:01:47.501743 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:01:47.502109 | orchestrator | 2025-07-04 18:01:47.502183 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-04 18:01:47.502722 | orchestrator | Friday 04 July 2025 18:01:47 +0000 (0:00:00.223) 0:00:07.924 *********** 2025-07-04 18:01:47.735203 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:01:47.735345 | orchestrator | 2025-07-04 18:01:47.735357 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-04 18:01:47.735365 | orchestrator | Friday 04 July 2025 18:01:47 +0000 (0:00:00.234) 0:00:08.158 *********** 2025-07-04 18:01:47.978210 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:01:47.978963 | orchestrator | 2025-07-04 18:01:47.981323 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-04 18:01:47.983404 | orchestrator | Friday 04 July 2025 18:01:47 +0000 (0:00:00.242) 0:00:08.400 *********** 2025-07-04 18:01:48.173200 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:01:48.173466 | orchestrator | 2025-07-04 18:01:48.177190 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-04 18:01:48.179007 | orchestrator | Friday 04 July 2025 18:01:48 +0000 (0:00:00.197) 0:00:08.598 *********** 2025-07-04 18:01:48.370802 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:01:48.370947 | orchestrator | 2025-07-04 18:01:48.371379 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-04 18:01:48.371752 | orchestrator | Friday 04 July 2025 18:01:48 +0000 (0:00:00.198) 0:00:08.797 *********** 2025-07-04 18:01:49.214099 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-07-04 18:01:49.214190 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-07-04 
18:01:49.216857 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2025-07-04 18:01:49.217420 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2025-07-04 18:01:49.217796 | orchestrator |
2025-07-04 18:01:49.218225 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-04 18:01:49.218645 | orchestrator | Friday 04 July 2025 18:01:49 +0000 (0:00:00.841) 0:00:09.638 ***********
2025-07-04 18:01:49.427849 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:01:49.427938 | orchestrator |
2025-07-04 18:01:49.427954 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-04 18:01:49.427966 | orchestrator | Friday 04 July 2025 18:01:49 +0000 (0:00:00.210) 0:00:09.849 ***********
2025-07-04 18:01:49.636603 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:01:49.640833 | orchestrator |
2025-07-04 18:01:49.641729 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-04 18:01:49.642347 | orchestrator | Friday 04 July 2025 18:01:49 +0000 (0:00:00.212) 0:00:10.061 ***********
2025-07-04 18:01:49.829271 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:01:49.829519 | orchestrator |
2025-07-04 18:01:49.830113 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-04 18:01:49.830752 | orchestrator | Friday 04 July 2025 18:01:49 +0000 (0:00:00.194) 0:00:10.255 ***********
2025-07-04 18:01:50.018362 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:01:50.019455 | orchestrator |
2025-07-04 18:01:50.021136 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2025-07-04 18:01:50.021841 | orchestrator | Friday 04 July 2025 18:01:50 +0000 (0:00:00.189) 0:00:10.445 ***********
2025-07-04 18:01:50.194295 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None})
2025-07-04 18:01:50.195011 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None})
2025-07-04 18:01:50.198321 | orchestrator |
2025-07-04 18:01:50.198399 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2025-07-04 18:01:50.198418 | orchestrator | Friday 04 July 2025 18:01:50 +0000 (0:00:00.176) 0:00:10.621 ***********
2025-07-04 18:01:50.308958 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:01:50.309454 | orchestrator |
2025-07-04 18:01:50.310004 | orchestrator | TASK [Generate DB VG names] ****************************************************
2025-07-04 18:01:50.315131 | orchestrator | Friday 04 July 2025 18:01:50 +0000 (0:00:00.114) 0:00:10.735 ***********
2025-07-04 18:01:50.430418 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:01:50.433435 | orchestrator |
2025-07-04 18:01:50.435207 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2025-07-04 18:01:50.436292 | orchestrator | Friday 04 July 2025 18:01:50 +0000 (0:00:00.120) 0:00:10.855 ***********
2025-07-04 18:01:50.544109 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:01:50.545891 | orchestrator |
2025-07-04 18:01:50.547488 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2025-07-04 18:01:50.548694 | orchestrator | Friday 04 July 2025 18:01:50 +0000 (0:00:00.114) 0:00:10.970 ***********
2025-07-04 18:01:50.665094 | orchestrator | ok: [testbed-node-3]
2025-07-04 18:01:50.665187 | orchestrator |
2025-07-04 18:01:50.666891 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2025-07-04 18:01:50.667673 | orchestrator | Friday 04 July 2025 18:01:50 +0000 (0:00:00.120) 0:00:11.090 ***********
2025-07-04 18:01:50.839137 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '32d6ac83-1783-5cc7-8f93-7bc92d6b2f36'}})
2025-07-04 18:01:50.840879 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '50c65579-7f86-5010-a824-2221e6b8d3f0'}})
2025-07-04 18:01:50.842330 | orchestrator |
2025-07-04 18:01:50.843488 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2025-07-04 18:01:50.845207 | orchestrator | Friday 04 July 2025 18:01:50 +0000 (0:00:00.171) 0:00:11.262 ***********
2025-07-04 18:01:50.998186 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '32d6ac83-1783-5cc7-8f93-7bc92d6b2f36'}})
2025-07-04 18:01:50.998375 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '50c65579-7f86-5010-a824-2221e6b8d3f0'}})
2025-07-04 18:01:50.998811 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:01:51.000850 | orchestrator |
2025-07-04 18:01:51.001476 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2025-07-04 18:01:51.004476 | orchestrator | Friday 04 July 2025 18:01:50 +0000 (0:00:00.157) 0:00:11.420 ***********
2025-07-04 18:01:51.362592 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '32d6ac83-1783-5cc7-8f93-7bc92d6b2f36'}})
2025-07-04 18:01:51.363288 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '50c65579-7f86-5010-a824-2221e6b8d3f0'}})
2025-07-04 18:01:51.363933 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:01:51.365285 | orchestrator |
2025-07-04 18:01:51.367727 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2025-07-04 18:01:51.369413 | orchestrator | Friday 04 July 2025 18:01:51 +0000 (0:00:00.369) 0:00:11.789 ***********
2025-07-04 18:01:51.510383 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '32d6ac83-1783-5cc7-8f93-7bc92d6b2f36'}})
2025-07-04 18:01:51.512203 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '50c65579-7f86-5010-a824-2221e6b8d3f0'}})
2025-07-04 18:01:51.513855 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:01:51.513890 | orchestrator |
2025-07-04 18:01:51.514876 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-07-04 18:01:51.516447 | orchestrator | Friday 04 July 2025 18:01:51 +0000 (0:00:00.146) 0:00:11.935 ***********
2025-07-04 18:01:51.657876 | orchestrator | ok: [testbed-node-3]
2025-07-04 18:01:51.658422 | orchestrator |
2025-07-04 18:01:51.659276 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-07-04 18:01:51.660157 | orchestrator | Friday 04 July 2025 18:01:51 +0000 (0:00:00.147) 0:00:12.082 ***********
2025-07-04 18:01:51.814119 | orchestrator | ok: [testbed-node-3]
2025-07-04 18:01:51.814202 | orchestrator |
2025-07-04 18:01:51.816166 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-07-04 18:01:51.817224 | orchestrator | Friday 04 July 2025 18:01:51 +0000 (0:00:00.155) 0:00:12.238 ***********
2025-07-04 18:01:51.950360 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:01:51.952055 | orchestrator |
2025-07-04 18:01:51.953409 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-07-04 18:01:51.955523 | orchestrator | Friday 04 July 2025 18:01:51 +0000 (0:00:00.136) 0:00:12.375 ***********
2025-07-04 18:01:52.094534 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:01:52.096628 | orchestrator |
2025-07-04 18:01:52.098699 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-07-04 18:01:52.100975 | orchestrator | Friday 04 July 2025 18:01:52 +0000 (0:00:00.144) 0:00:12.519 ***********
2025-07-04 18:01:52.283881 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:01:52.286417 | orchestrator |
2025-07-04 18:01:52.287226 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-07-04 18:01:52.288055 | orchestrator | Friday 04 July 2025 18:01:52 +0000 (0:00:00.188) 0:00:12.707 ***********
2025-07-04 18:01:52.433908 | orchestrator | ok: [testbed-node-3] => {
2025-07-04 18:01:52.436015 | orchestrator |     "ceph_osd_devices": {
2025-07-04 18:01:52.439368 | orchestrator |         "sdb": {
2025-07-04 18:01:52.439406 | orchestrator |             "osd_lvm_uuid": "32d6ac83-1783-5cc7-8f93-7bc92d6b2f36"
2025-07-04 18:01:52.439421 | orchestrator |         },
2025-07-04 18:01:52.440065 | orchestrator |         "sdc": {
2025-07-04 18:01:52.441399 | orchestrator |             "osd_lvm_uuid": "50c65579-7f86-5010-a824-2221e6b8d3f0"
2025-07-04 18:01:52.442168 | orchestrator |         }
2025-07-04 18:01:52.442877 | orchestrator |     }
2025-07-04 18:01:52.443391 | orchestrator | }
2025-07-04 18:01:52.444324 | orchestrator |
2025-07-04 18:01:52.448193 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-07-04 18:01:52.448633 | orchestrator | Friday 04 July 2025 18:01:52 +0000 (0:00:00.150) 0:00:12.858 ***********
2025-07-04 18:01:52.571372 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:01:52.571789 | orchestrator |
2025-07-04 18:01:52.573495 | orchestrator | TASK [Print DB devices] ********************************************************
2025-07-04 18:01:52.575377 | orchestrator | Friday 04 July 2025 18:01:52 +0000 (0:00:00.139) 0:00:12.998 ***********
2025-07-04 18:01:52.728704 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:01:52.730549 | orchestrator |
2025-07-04 18:01:52.731031 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-07-04 18:01:52.735612 | orchestrator | Friday 04 July 2025 18:01:52 +0000 (0:00:00.152) 0:00:13.150 ***********
2025-07-04 18:01:52.880836 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:01:52.881201 | orchestrator |
2025-07-04 18:01:52.882600 | orchestrator | TASK [Print configuration data] ************************************************
2025-07-04 18:01:52.884048 | orchestrator | Friday 04 July 2025 18:01:52 +0000 (0:00:00.155) 0:00:13.306 ***********
2025-07-04 18:01:53.128659 | orchestrator | changed: [testbed-node-3] => {
2025-07-04 18:01:53.130449 | orchestrator |     "_ceph_configure_lvm_config_data": {
2025-07-04 18:01:53.133903 | orchestrator |         "ceph_osd_devices": {
2025-07-04 18:01:53.137091 | orchestrator |             "sdb": {
2025-07-04 18:01:53.137402 | orchestrator |                 "osd_lvm_uuid": "32d6ac83-1783-5cc7-8f93-7bc92d6b2f36"
2025-07-04 18:01:53.137864 | orchestrator |             },
2025-07-04 18:01:53.138459 | orchestrator |             "sdc": {
2025-07-04 18:01:53.139333 | orchestrator |                 "osd_lvm_uuid": "50c65579-7f86-5010-a824-2221e6b8d3f0"
2025-07-04 18:01:53.139779 | orchestrator |             }
2025-07-04 18:01:53.139991 | orchestrator |         },
2025-07-04 18:01:53.140439 | orchestrator |         "lvm_volumes": [
2025-07-04 18:01:53.141196 | orchestrator |             {
2025-07-04 18:01:53.142331 | orchestrator |                 "data": "osd-block-32d6ac83-1783-5cc7-8f93-7bc92d6b2f36",
2025-07-04 18:01:53.142660 | orchestrator |                 "data_vg": "ceph-32d6ac83-1783-5cc7-8f93-7bc92d6b2f36"
2025-07-04 18:01:53.143581 | orchestrator |             },
2025-07-04 18:01:53.144069 | orchestrator |             {
2025-07-04 18:01:53.144903 | orchestrator |                 "data": "osd-block-50c65579-7f86-5010-a824-2221e6b8d3f0",
2025-07-04 18:01:53.146398 | orchestrator |                 "data_vg": "ceph-50c65579-7f86-5010-a824-2221e6b8d3f0"
2025-07-04 18:01:53.148291 | orchestrator |             }
2025-07-04 18:01:53.149154 | orchestrator |         ]
2025-07-04 18:01:53.149954 | orchestrator |     }
2025-07-04 18:01:53.150152 | orchestrator | }
2025-07-04 18:01:53.151409 | orchestrator |
2025-07-04 18:01:53.152044 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-07-04 18:01:53.152533 | orchestrator | Friday 04 July 2025 18:01:53 +0000 (0:00:00.246) 0:00:13.552 ***********
2025-07-04 18:01:55.577445 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-07-04 18:01:55.578131 | orchestrator |
2025-07-04 18:01:55.578159 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-07-04 18:01:55.579750 | orchestrator |
2025-07-04 18:01:55.580852 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-07-04 18:01:55.584370 | orchestrator | Friday 04 July 2025 18:01:55 +0000 (0:00:02.449) 0:00:16.002 ***********
2025-07-04 18:01:55.836435 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-07-04 18:01:55.836603 | orchestrator |
2025-07-04 18:01:55.837843 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-07-04 18:01:55.838952 | orchestrator | Friday 04 July 2025 18:01:55 +0000 (0:00:00.259) 0:00:16.261 ***********
2025-07-04 18:01:56.083681 | orchestrator | ok: [testbed-node-4]
2025-07-04 18:01:56.084968 | orchestrator |
2025-07-04 18:01:56.085940 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-04 18:01:56.091860 | orchestrator | Friday 04 July 2025 18:01:56 +0000 (0:00:00.245) 0:00:16.507 ***********
2025-07-04 18:01:56.510999 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2025-07-04 18:01:56.511123 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2025-07-04 18:01:56.511280 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2025-07-04 18:01:56.511301 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2025-07-04 18:01:56.512509 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2025-07-04 18:01:56.512601 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2025-07-04 18:01:56.512646 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2025-07-04 18:01:56.513339 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2025-07-04 18:01:56.513769 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2025-07-04 18:01:56.513959 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2025-07-04 18:01:56.515764 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2025-07-04 18:01:56.516020 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2025-07-04 18:01:56.516832 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2025-07-04 18:01:56.519787 | orchestrator |
2025-07-04 18:01:56.519866 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-04 18:01:56.520072 | orchestrator | Friday 04 July 2025 18:01:56 +0000 (0:00:00.426) 0:00:16.933 ***********
2025-07-04 18:01:56.725725 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:01:56.725817 | orchestrator |
2025-07-04 18:01:56.726288 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-04 18:01:56.729871 | orchestrator | Friday 04 July 2025 18:01:56 +0000 (0:00:00.218) 0:00:17.152 ***********
2025-07-04 18:01:56.931198 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:01:56.931441 | orchestrator |
2025-07-04 18:01:56.932978 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-04 18:01:56.938222 | orchestrator | Friday 04 July 2025 18:01:56 +0000 (0:00:00.202) 0:00:17.355 ***********
2025-07-04 18:01:57.152853 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:01:57.153826 | orchestrator |
2025-07-04 18:01:57.156073 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-04 18:01:57.157401 | orchestrator | Friday 04 July 2025 18:01:57 +0000 (0:00:00.222) 0:00:17.578 ***********
2025-07-04 18:01:57.370463 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:01:57.373880 | orchestrator |
2025-07-04 18:01:57.375287 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-04 18:01:57.376227 | orchestrator | Friday 04 July 2025 18:01:57 +0000 (0:00:00.213) 0:00:17.792 ***********
2025-07-04 18:01:57.848483 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:01:57.848572 | orchestrator |
2025-07-04 18:01:57.849072 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-04 18:01:57.849816 | orchestrator | Friday 04 July 2025 18:01:57 +0000 (0:00:00.478) 0:00:18.270 ***********
2025-07-04 18:01:58.032051 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:01:58.032140 | orchestrator |
2025-07-04 18:01:58.032155 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-04 18:01:58.032480 | orchestrator | Friday 04 July 2025 18:01:58 +0000 (0:00:00.186) 0:00:18.457 ***********
2025-07-04 18:01:58.269900 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:01:58.269990 | orchestrator |
2025-07-04 18:01:58.270007 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-04 18:01:58.271087 | orchestrator | Friday 04 July 2025 18:01:58 +0000 (0:00:00.237) 0:00:18.694 ***********
2025-07-04 18:01:58.446903 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:01:58.447360 | orchestrator |
2025-07-04 18:01:58.447395 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-04 18:01:58.449769 | orchestrator | Friday 04 July 2025 18:01:58 +0000 (0:00:00.177) 0:00:18.872 ***********
2025-07-04 18:01:58.840637 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_654ae738-db23-4503-810d-da49c3934f2e)
2025-07-04 18:01:58.840746 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_654ae738-db23-4503-810d-da49c3934f2e)
2025-07-04 18:01:58.840980 | orchestrator |
2025-07-04 18:01:58.841019 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-04 18:01:58.841165 | orchestrator | Friday 04 July 2025 18:01:58 +0000 (0:00:00.395) 0:00:19.267 ***********
2025-07-04 18:01:59.227489 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_22af1316-5bc1-4af9-ac7a-65db3b57cabb)
2025-07-04 18:01:59.232565 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_22af1316-5bc1-4af9-ac7a-65db3b57cabb)
2025-07-04 18:01:59.233360 | orchestrator |
2025-07-04 18:01:59.235281 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-04 18:01:59.239220 | orchestrator | Friday 04 July 2025 18:01:59 +0000 (0:00:00.386) 0:00:19.654 ***********
2025-07-04 18:01:59.627056 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_f2e9dc75-50de-4afc-bb89-e69d1400c858)
2025-07-04 18:01:59.628686 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_f2e9dc75-50de-4afc-bb89-e69d1400c858)
2025-07-04 18:01:59.632043 | orchestrator |
2025-07-04 18:01:59.633193 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-04 18:01:59.634146 | orchestrator | Friday 04 July 2025 18:01:59 +0000 (0:00:00.399) 0:00:20.053 ***********
2025-07-04 18:02:00.050695 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_9dcda133-58d2-4853-8afe-c4a876875c80)
2025-07-04 18:02:00.053087 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_9dcda133-58d2-4853-8afe-c4a876875c80)
2025-07-04 18:02:00.053153 | orchestrator |
2025-07-04 18:02:00.053170 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-04 18:02:00.053270 | orchestrator | Friday 04 July 2025 18:02:00 +0000 (0:00:00.420) 0:00:20.474 ***********
2025-07-04 18:02:00.396068 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-07-04 18:02:00.396221 | orchestrator |
2025-07-04 18:02:00.397437 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-04 18:02:00.397685 | orchestrator | Friday 04 July 2025 18:02:00 +0000 (0:00:00.348) 0:00:20.823 ***********
2025-07-04 18:02:00.800031 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2025-07-04 18:02:00.802212 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2025-07-04 18:02:00.802277 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2025-07-04 18:02:00.803476 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2025-07-04 18:02:00.804693 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2025-07-04 18:02:00.805649 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2025-07-04 18:02:00.806802 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2025-07-04 18:02:00.808048 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2025-07-04 18:02:00.809069 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2025-07-04 18:02:00.810254 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2025-07-04 18:02:00.810857 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2025-07-04 18:02:00.811573 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2025-07-04 18:02:00.811905 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2025-07-04 18:02:00.812655 | orchestrator |
2025-07-04 18:02:00.813988 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-04 18:02:00.814819 | orchestrator | Friday 04 July 2025 18:02:00 +0000 (0:00:00.400) 0:00:21.223 ***********
2025-07-04 18:02:01.006385 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:02:01.007216 | orchestrator |
2025-07-04 18:02:01.009273 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-04 18:02:01.009655 | orchestrator | Friday 04 July 2025 18:02:01 +0000 (0:00:00.208) 0:00:21.432 ***********
2025-07-04 18:02:01.566881 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:02:01.567146 | orchestrator |
2025-07-04 18:02:01.568512 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-04 18:02:01.570898 | orchestrator | Friday 04 July 2025 18:02:01 +0000 (0:00:00.559) 0:00:21.992 ***********
2025-07-04 18:02:01.780876 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:02:01.781072 | orchestrator |
2025-07-04 18:02:01.783150 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-04 18:02:01.783188 | orchestrator | Friday 04 July 2025 18:02:01 +0000 (0:00:00.212) 0:00:22.205 ***********
2025-07-04 18:02:02.070115 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:02:02.070829 | orchestrator |
2025-07-04 18:02:02.071612 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-04 18:02:02.073850 | orchestrator | Friday 04 July 2025 18:02:02 +0000 (0:00:00.290) 0:00:22.495 ***********
2025-07-04 18:02:02.301811 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:02:02.302302 | orchestrator |
2025-07-04 18:02:02.303765 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-04 18:02:02.304802 | orchestrator | Friday 04 July 2025 18:02:02 +0000 (0:00:00.231) 0:00:22.726 ***********
2025-07-04 18:02:02.506645 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:02:02.508125 | orchestrator |
2025-07-04 18:02:02.508846 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-04 18:02:02.510083 | orchestrator | Friday 04 July 2025 18:02:02 +0000 (0:00:00.205) 0:00:22.931 ***********
2025-07-04 18:02:02.724187 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:02:02.724406 | orchestrator |
2025-07-04 18:02:02.726604 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-04 18:02:02.728590 | orchestrator | Friday 04 July 2025 18:02:02 +0000 (0:00:00.216) 0:00:23.148 ***********
2025-07-04 18:02:02.913990 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:02:02.914850 | orchestrator |
2025-07-04 18:02:02.915691 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-04 18:02:02.917318 | orchestrator | Friday 04 July 2025 18:02:02 +0000 (0:00:00.191) 0:00:23.339 ***********
2025-07-04 18:02:03.619718 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2025-07-04 18:02:03.621803 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2025-07-04 18:02:03.624607 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2025-07-04 18:02:03.625430 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2025-07-04 18:02:03.625796 | orchestrator |
2025-07-04 18:02:03.626498 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-04 18:02:03.627008 | orchestrator | Friday 04 July 2025 18:02:03 +0000 (0:00:00.705) 0:00:24.044 ***********
2025-07-04 18:02:03.854340 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:02:03.855737 | orchestrator |
2025-07-04 18:02:03.856829 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-04 18:02:03.858999 | orchestrator | Friday 04 July 2025 18:02:03 +0000 (0:00:00.234) 0:00:24.279 ***********
2025-07-04 18:02:04.059030 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:02:04.059138 | orchestrator |
2025-07-04 18:02:04.059154 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-04 18:02:04.059167 | orchestrator | Friday 04 July 2025 18:02:04 +0000 (0:00:00.202) 0:00:24.482 ***********
2025-07-04 18:02:04.253763 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:02:04.255869 | orchestrator |
2025-07-04 18:02:04.256483 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-04 18:02:04.258468 | orchestrator | Friday 04 July 2025 18:02:04 +0000 (0:00:00.196) 0:00:24.678 ***********
2025-07-04 18:02:04.478823 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:02:04.478950 | orchestrator |
2025-07-04 18:02:04.479048 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2025-07-04 18:02:04.479668 | orchestrator | Friday 04 July 2025 18:02:04 +0000 (0:00:00.226) 0:00:24.904 ***********
2025-07-04 18:02:04.915501 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None})
2025-07-04 18:02:04.915644 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None})
2025-07-04 18:02:04.915979 | orchestrator |
2025-07-04 18:02:04.919942 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2025-07-04 18:02:04.920887 | orchestrator | Friday 04 July 2025 18:02:04 +0000 (0:00:00.433) 0:00:25.338 ***********
2025-07-04 18:02:05.062376 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:02:05.062712 | orchestrator |
2025-07-04 18:02:05.063082 | orchestrator | TASK [Generate DB VG names] ****************************************************
2025-07-04 18:02:05.063577 | orchestrator | Friday 04 July 2025 18:02:05 +0000 (0:00:00.149) 0:00:25.487 ***********
2025-07-04 18:02:05.241375 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:02:05.244767 | orchestrator |
2025-07-04 18:02:05.244834 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2025-07-04 18:02:05.244842 | orchestrator | Friday 04 July 2025 18:02:05 +0000 (0:00:00.179) 0:00:25.666 ***********
2025-07-04 18:02:05.378004 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:02:05.379686 | orchestrator |
2025-07-04 18:02:05.380918 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2025-07-04 18:02:05.382882 | orchestrator | Friday 04 July 2025 18:02:05 +0000 (0:00:00.136) 0:00:25.803 ***********
2025-07-04 18:02:05.530149 | orchestrator | ok: [testbed-node-4]
2025-07-04 18:02:05.531152 | orchestrator |
2025-07-04 18:02:05.534178 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2025-07-04 18:02:05.534755 | orchestrator | Friday 04 July 2025 18:02:05 +0000 (0:00:00.150) 0:00:25.954 ***********
2025-07-04 18:02:05.726867 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '0c11b362-ac03-5009-be6f-11a9ef5f18dc'}})
2025-07-04 18:02:05.730641 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b396848d-3790-5c5a-8f8a-1e47b4270a43'}})
2025-07-04 18:02:05.736883 | orchestrator |
2025-07-04 18:02:05.739608 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2025-07-04 18:02:05.739775 | orchestrator | Friday 04 July 2025 18:02:05 +0000 (0:00:00.195) 0:00:26.150 ***********
2025-07-04 18:02:05.888647 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '0c11b362-ac03-5009-be6f-11a9ef5f18dc'}})
2025-07-04 18:02:05.890744 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b396848d-3790-5c5a-8f8a-1e47b4270a43'}})
2025-07-04 18:02:05.893479 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:02:05.895905 | orchestrator |
2025-07-04 18:02:05.898593 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2025-07-04 18:02:05.902598 | orchestrator | Friday 04 July 2025 18:02:05 +0000 (0:00:00.162) 0:00:26.313 ***********
2025-07-04 18:02:06.119611 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '0c11b362-ac03-5009-be6f-11a9ef5f18dc'}})
2025-07-04 18:02:06.122168 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b396848d-3790-5c5a-8f8a-1e47b4270a43'}})
2025-07-04 18:02:06.122372 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:02:06.124489 | orchestrator |
2025-07-04 18:02:06.124663 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2025-07-04 18:02:06.126527 | orchestrator | Friday 04 July 2025 18:02:06 +0000 (0:00:00.231) 0:00:26.544 ***********
2025-07-04 18:02:06.266244 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '0c11b362-ac03-5009-be6f-11a9ef5f18dc'}})
2025-07-04 18:02:06.268758 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b396848d-3790-5c5a-8f8a-1e47b4270a43'}})
2025-07-04 18:02:06.271862 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:02:06.271894 | orchestrator |
2025-07-04 18:02:06.271926 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-07-04 18:02:06.273744 | orchestrator | Friday 04 July 2025 18:02:06 +0000 (0:00:00.146) 0:00:26.691 ***********
2025-07-04 18:02:06.419177 | orchestrator | ok: [testbed-node-4]
2025-07-04 18:02:06.420687 | orchestrator |
2025-07-04 18:02:06.422283 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-07-04 18:02:06.424768 | orchestrator | Friday 04 July 2025 18:02:06 +0000 (0:00:00.151) 0:00:26.843 ***********
2025-07-04 18:02:06.566737 | orchestrator | ok: [testbed-node-4]
2025-07-04 18:02:06.568440 | orchestrator |
2025-07-04 18:02:06.569543 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-07-04 18:02:06.572048 | orchestrator | Friday 04 July 2025 18:02:06 +0000 (0:00:00.148) 0:00:26.991 ***********
2025-07-04 18:02:06.715482 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:02:06.719185 | orchestrator |
2025-07-04 18:02:06.719302 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-07-04 18:02:06.720123 | orchestrator | Friday 04 July 2025 18:02:06 +0000 (0:00:00.147) 0:00:27.139 ***********
2025-07-04 18:02:07.050720 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:02:07.053066 | orchestrator |
2025-07-04 18:02:07.054940 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-07-04 18:02:07.056434 | orchestrator | Friday 04 July 2025 18:02:07 +0000 (0:00:00.335) 0:00:27.474 ***********
2025-07-04 18:02:07.195765 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:02:07.196556 | orchestrator |
2025-07-04 18:02:07.197952 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-07-04 18:02:07.199103 | orchestrator | Friday 04 July 2025 18:02:07 +0000 (0:00:00.141) 0:00:27.616 ***********
2025-07-04 18:02:07.355205 | orchestrator | ok: [testbed-node-4] => {
2025-07-04 18:02:07.358703 | orchestrator |     "ceph_osd_devices": {
2025-07-04 18:02:07.360617 | orchestrator |         "sdb": {
2025-07-04 18:02:07.364994 | orchestrator |             "osd_lvm_uuid": "0c11b362-ac03-5009-be6f-11a9ef5f18dc"
2025-07-04 18:02:07.368777 | orchestrator |         },
2025-07-04 18:02:07.370812 | orchestrator |         "sdc": {
2025-07-04 18:02:07.372491 | orchestrator |             "osd_lvm_uuid": "b396848d-3790-5c5a-8f8a-1e47b4270a43"
2025-07-04 18:02:07.373988 | orchestrator |         }
2025-07-04 18:02:07.375012 | orchestrator |     }
2025-07-04 18:02:07.375925 | orchestrator | }
2025-07-04 18:02:07.377031 | orchestrator |
2025-07-04 18:02:07.378119 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-07-04 18:02:07.378923 | orchestrator | Friday 04 July 2025 18:02:07 +0000 (0:00:00.162) 0:00:27.779 ***********
2025-07-04 18:02:07.511376 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:02:07.512468 | orchestrator |
2025-07-04 18:02:07.516302 | orchestrator | TASK [Print DB devices] ********************************************************
2025-07-04 18:02:07.517859 | orchestrator | Friday 04 July 2025 18:02:07 +0000 (0:00:00.156) 0:00:27.935 ***********
2025-07-04 18:02:07.649767 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:02:07.650550 | orchestrator |
2025-07-04 18:02:07.651812 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-07-04 18:02:07.652787 | orchestrator | Friday 04 July 2025 18:02:07 +0000 (0:00:00.140) 0:00:28.076 ***********
2025-07-04 18:02:07.781031 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:02:07.782560 | orchestrator |
2025-07-04 18:02:07.786823 | orchestrator | TASK [Print configuration data] ************************************************
2025-07-04 18:02:07.788192 | orchestrator | Friday 04 July 2025 18:02:07 +0000 (0:00:00.128) 0:00:28.204 ***********
2025-07-04 18:02:07.989209 | orchestrator | changed: [testbed-node-4] => {
2025-07-04 18:02:07.990371 | orchestrator |     "_ceph_configure_lvm_config_data": {
2025-07-04 18:02:07.991698 | orchestrator |         "ceph_osd_devices": {
2025-07-04 18:02:07.993245 | orchestrator |             "sdb": {
2025-07-04 18:02:07.996167 | orchestrator |                 "osd_lvm_uuid": "0c11b362-ac03-5009-be6f-11a9ef5f18dc"
2025-07-04 18:02:07.996843 | orchestrator |             },
2025-07-04 18:02:07.997691 | orchestrator |             "sdc": {
2025-07-04 18:02:07.998273 | orchestrator |                 "osd_lvm_uuid": "b396848d-3790-5c5a-8f8a-1e47b4270a43"
2025-07-04 18:02:07.999305 | orchestrator |             }
2025-07-04 18:02:08.001171 | orchestrator |         },
2025-07-04 18:02:08.001889 | orchestrator |         "lvm_volumes": [
2025-07-04 18:02:08.002992 | orchestrator |             {
2025-07-04 18:02:08.004028 | orchestrator |                 "data": "osd-block-0c11b362-ac03-5009-be6f-11a9ef5f18dc",
2025-07-04 18:02:08.004514 | orchestrator |                 "data_vg": "ceph-0c11b362-ac03-5009-be6f-11a9ef5f18dc"
2025-07-04 18:02:08.006808 | orchestrator |             },
2025-07-04 18:02:08.007344 | orchestrator |             {
2025-07-04 18:02:08.009197 | orchestrator |                 "data": "osd-block-b396848d-3790-5c5a-8f8a-1e47b4270a43",
2025-07-04 18:02:08.010402 | orchestrator |                 "data_vg": "ceph-b396848d-3790-5c5a-8f8a-1e47b4270a43"
2025-07-04 18:02:08.012320 | orchestrator |             }
2025-07-04 18:02:08.012999 | orchestrator |         ]
2025-07-04 18:02:08.013988 | orchestrator |     }
2025-07-04 18:02:08.014939 | orchestrator | }
2025-07-04 18:02:08.015959 | orchestrator |
2025-07-04 18:02:08.016392 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-07-04 18:02:08.016854 | orchestrator | Friday 04 July 2025 18:02:07 +0000 (0:00:00.209) 0:00:28.414 ***********
2025-07-04 18:02:09.078495 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-07-04 18:02:09.078723 | orchestrator |
2025-07-04 18:02:09.079702 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-07-04 18:02:09.084587 | orchestrator |
2025-07-04 18:02:09.084955 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-07-04 18:02:09.085845 | orchestrator | Friday 04 July 2025 18:02:09 +0000 (0:00:01.088) 0:00:29.503 ***********
2025-07-04 18:02:09.575162 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-07-04 18:02:09.578686 | orchestrator |
2025-07-04 18:02:09.578766 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-07-04 18:02:09.578786 | orchestrator | Friday 04 July 2025 18:02:09 +0000 (0:00:00.496) 0:00:30.000 ***********
2025-07-04 18:02:10.370117 | orchestrator | ok: [testbed-node-5]
2025-07-04 18:02:10.371658 | orchestrator |
2025-07-04 18:02:10.372757 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-04 18:02:10.373875 | orchestrator | Friday 04 July 2025 18:02:10 +0000 (0:00:00.793) 0:00:30.793 ***********
2025-07-04 18:02:10.806998 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2025-07-04 18:02:10.809339 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2025-07-04 18:02:10.810368 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2025-07-04 18:02:10.810709 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2025-07-04 18:02:10.811809 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2025-07-04 18:02:10.812700 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2025-07-04 18:02:10.813585 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2025-07-04 18:02:10.814687
| orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-07-04 18:02:10.815278 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-07-04 18:02:10.816434 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-07-04 18:02:10.817385 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-07-04 18:02:10.818190 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-07-04 18:02:10.819438 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-07-04 18:02:10.820126 | orchestrator | 2025-07-04 18:02:10.820826 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-04 18:02:10.821239 | orchestrator | Friday 04 July 2025 18:02:10 +0000 (0:00:00.436) 0:00:31.230 *********** 2025-07-04 18:02:11.075341 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:02:11.075438 | orchestrator | 2025-07-04 18:02:11.076781 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-04 18:02:11.079401 | orchestrator | Friday 04 July 2025 18:02:11 +0000 (0:00:00.270) 0:00:31.500 *********** 2025-07-04 18:02:11.299941 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:02:11.300257 | orchestrator | 2025-07-04 18:02:11.303099 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-04 18:02:11.304600 | orchestrator | Friday 04 July 2025 18:02:11 +0000 (0:00:00.224) 0:00:31.725 *********** 2025-07-04 18:02:11.523515 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:02:11.525495 | orchestrator | 2025-07-04 18:02:11.527004 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-04 18:02:11.527551 | 
orchestrator | Friday 04 July 2025 18:02:11 +0000 (0:00:00.223) 0:00:31.948 *********** 2025-07-04 18:02:11.760760 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:02:11.763842 | orchestrator | 2025-07-04 18:02:11.764806 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-04 18:02:11.766999 | orchestrator | Friday 04 July 2025 18:02:11 +0000 (0:00:00.236) 0:00:32.184 *********** 2025-07-04 18:02:11.968532 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:02:11.970070 | orchestrator | 2025-07-04 18:02:11.970108 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-04 18:02:11.972032 | orchestrator | Friday 04 July 2025 18:02:11 +0000 (0:00:00.208) 0:00:32.393 *********** 2025-07-04 18:02:12.162309 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:02:12.165450 | orchestrator | 2025-07-04 18:02:12.165503 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-04 18:02:12.166467 | orchestrator | Friday 04 July 2025 18:02:12 +0000 (0:00:00.194) 0:00:32.588 *********** 2025-07-04 18:02:12.384213 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:02:12.384695 | orchestrator | 2025-07-04 18:02:12.385776 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-04 18:02:12.387326 | orchestrator | Friday 04 July 2025 18:02:12 +0000 (0:00:00.221) 0:00:32.809 *********** 2025-07-04 18:02:12.601150 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:02:12.602615 | orchestrator | 2025-07-04 18:02:12.603697 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-04 18:02:12.604279 | orchestrator | Friday 04 July 2025 18:02:12 +0000 (0:00:00.214) 0:00:33.024 *********** 2025-07-04 18:02:13.262920 | orchestrator | ok: [testbed-node-5] => 
(item=scsi-0QEMU_QEMU_HARDDISK_e5fbf5c6-81a8-4539-96cc-19329771a958) 2025-07-04 18:02:13.264981 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_e5fbf5c6-81a8-4539-96cc-19329771a958) 2025-07-04 18:02:13.266063 | orchestrator | 2025-07-04 18:02:13.267443 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-04 18:02:13.268467 | orchestrator | Friday 04 July 2025 18:02:13 +0000 (0:00:00.660) 0:00:33.685 *********** 2025-07-04 18:02:14.172657 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_cc9ae976-88cb-4b21-9449-d8985ff12d4f) 2025-07-04 18:02:14.172843 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_cc9ae976-88cb-4b21-9449-d8985ff12d4f) 2025-07-04 18:02:14.174722 | orchestrator | 2025-07-04 18:02:14.175691 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-04 18:02:14.176191 | orchestrator | Friday 04 July 2025 18:02:14 +0000 (0:00:00.911) 0:00:34.596 *********** 2025-07-04 18:02:14.613991 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_d957e37b-6f48-487c-9682-d56dbc604f5a) 2025-07-04 18:02:14.614451 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_d957e37b-6f48-487c-9682-d56dbc604f5a) 2025-07-04 18:02:14.615750 | orchestrator | 2025-07-04 18:02:14.616907 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-04 18:02:14.618114 | orchestrator | Friday 04 July 2025 18:02:14 +0000 (0:00:00.443) 0:00:35.039 *********** 2025-07-04 18:02:15.056805 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_36831ba3-00a3-40d1-8c8d-d5688ce5b92e) 2025-07-04 18:02:15.057748 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_36831ba3-00a3-40d1-8c8d-d5688ce5b92e) 2025-07-04 18:02:15.059208 | orchestrator | 2025-07-04 18:02:15.059260 | orchestrator | TASK [Add known links to 
the list of available block devices] ****************** 2025-07-04 18:02:15.059852 | orchestrator | Friday 04 July 2025 18:02:15 +0000 (0:00:00.441) 0:00:35.481 *********** 2025-07-04 18:02:15.397692 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-07-04 18:02:15.397988 | orchestrator | 2025-07-04 18:02:15.398882 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-04 18:02:15.399833 | orchestrator | Friday 04 July 2025 18:02:15 +0000 (0:00:00.342) 0:00:35.823 *********** 2025-07-04 18:02:15.796047 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-07-04 18:02:15.796740 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-07-04 18:02:15.799438 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-07-04 18:02:15.799454 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-07-04 18:02:15.800122 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-07-04 18:02:15.801812 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-07-04 18:02:15.803708 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-07-04 18:02:15.804379 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-07-04 18:02:15.805284 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-07-04 18:02:15.805770 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-07-04 18:02:15.807293 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 
2025-07-04 18:02:15.808109 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-07-04 18:02:15.809130 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-07-04 18:02:15.809727 | orchestrator | 2025-07-04 18:02:15.810509 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-04 18:02:15.811107 | orchestrator | Friday 04 July 2025 18:02:15 +0000 (0:00:00.397) 0:00:36.221 *********** 2025-07-04 18:02:16.019917 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:02:16.020188 | orchestrator | 2025-07-04 18:02:16.021893 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-04 18:02:16.022010 | orchestrator | Friday 04 July 2025 18:02:16 +0000 (0:00:00.222) 0:00:36.443 *********** 2025-07-04 18:02:16.232616 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:02:16.233850 | orchestrator | 2025-07-04 18:02:16.234944 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-04 18:02:16.236655 | orchestrator | Friday 04 July 2025 18:02:16 +0000 (0:00:00.214) 0:00:36.657 *********** 2025-07-04 18:02:16.433853 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:02:16.435952 | orchestrator | 2025-07-04 18:02:16.438108 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-04 18:02:16.439055 | orchestrator | Friday 04 July 2025 18:02:16 +0000 (0:00:00.200) 0:00:36.858 *********** 2025-07-04 18:02:16.669094 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:02:16.671024 | orchestrator | 2025-07-04 18:02:16.671172 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-04 18:02:16.671812 | orchestrator | Friday 04 July 2025 18:02:16 +0000 (0:00:00.236) 0:00:37.094 *********** 2025-07-04 18:02:16.875585 
| orchestrator | skipping: [testbed-node-5] 2025-07-04 18:02:16.875952 | orchestrator | 2025-07-04 18:02:16.876940 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-04 18:02:16.880815 | orchestrator | Friday 04 July 2025 18:02:16 +0000 (0:00:00.205) 0:00:37.300 *********** 2025-07-04 18:02:17.546576 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:02:17.547881 | orchestrator | 2025-07-04 18:02:17.550382 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-04 18:02:17.550418 | orchestrator | Friday 04 July 2025 18:02:17 +0000 (0:00:00.671) 0:00:37.972 *********** 2025-07-04 18:02:17.748992 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:02:17.749849 | orchestrator | 2025-07-04 18:02:17.750781 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-04 18:02:17.752951 | orchestrator | Friday 04 July 2025 18:02:17 +0000 (0:00:00.201) 0:00:38.174 *********** 2025-07-04 18:02:17.964972 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:02:17.970355 | orchestrator | 2025-07-04 18:02:17.970502 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-04 18:02:17.971655 | orchestrator | Friday 04 July 2025 18:02:17 +0000 (0:00:00.215) 0:00:38.389 *********** 2025-07-04 18:02:18.684416 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-07-04 18:02:18.685069 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-07-04 18:02:18.687432 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-07-04 18:02:18.687581 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-07-04 18:02:18.687980 | orchestrator | 2025-07-04 18:02:18.689092 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-04 18:02:18.689455 | orchestrator | Friday 04 July 2025 18:02:18 +0000 (0:00:00.719) 0:00:39.108 
*********** 2025-07-04 18:02:18.912446 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:02:18.912998 | orchestrator | 2025-07-04 18:02:18.914115 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-04 18:02:18.914692 | orchestrator | Friday 04 July 2025 18:02:18 +0000 (0:00:00.228) 0:00:39.337 *********** 2025-07-04 18:02:19.116068 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:02:19.116549 | orchestrator | 2025-07-04 18:02:19.117244 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-04 18:02:19.117860 | orchestrator | Friday 04 July 2025 18:02:19 +0000 (0:00:00.204) 0:00:39.542 *********** 2025-07-04 18:02:19.311621 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:02:19.312886 | orchestrator | 2025-07-04 18:02:19.314425 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-04 18:02:19.316386 | orchestrator | Friday 04 July 2025 18:02:19 +0000 (0:00:00.194) 0:00:39.736 *********** 2025-07-04 18:02:19.512364 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:02:19.513008 | orchestrator | 2025-07-04 18:02:19.513934 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-07-04 18:02:19.516320 | orchestrator | Friday 04 July 2025 18:02:19 +0000 (0:00:00.200) 0:00:39.937 *********** 2025-07-04 18:02:19.690388 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2025-07-04 18:02:19.691524 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2025-07-04 18:02:19.693717 | orchestrator | 2025-07-04 18:02:19.694188 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-07-04 18:02:19.694878 | orchestrator | Friday 04 July 2025 18:02:19 +0000 (0:00:00.177) 0:00:40.114 *********** 2025-07-04 18:02:19.837693 | orchestrator | skipping: 
[testbed-node-5] 2025-07-04 18:02:19.839090 | orchestrator | 2025-07-04 18:02:19.839952 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-07-04 18:02:19.841688 | orchestrator | Friday 04 July 2025 18:02:19 +0000 (0:00:00.149) 0:00:40.264 *********** 2025-07-04 18:02:19.968309 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:02:19.968744 | orchestrator | 2025-07-04 18:02:19.970204 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-07-04 18:02:19.971666 | orchestrator | Friday 04 July 2025 18:02:19 +0000 (0:00:00.128) 0:00:40.392 *********** 2025-07-04 18:02:20.101195 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:02:20.102471 | orchestrator | 2025-07-04 18:02:20.104771 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-07-04 18:02:20.104800 | orchestrator | Friday 04 July 2025 18:02:20 +0000 (0:00:00.134) 0:00:40.526 *********** 2025-07-04 18:02:20.459065 | orchestrator | ok: [testbed-node-5] 2025-07-04 18:02:20.459829 | orchestrator | 2025-07-04 18:02:20.460783 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-07-04 18:02:20.461452 | orchestrator | Friday 04 July 2025 18:02:20 +0000 (0:00:00.355) 0:00:40.882 *********** 2025-07-04 18:02:20.647504 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a98224fe-e18a-5ddc-b2f0-6ffdc4d7e2d6'}}) 2025-07-04 18:02:20.649236 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '38a85088-e19d-56c7-801b-f45e1c084bd2'}}) 2025-07-04 18:02:20.650195 | orchestrator | 2025-07-04 18:02:20.651846 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-07-04 18:02:20.652898 | orchestrator | Friday 04 July 2025 18:02:20 +0000 (0:00:00.190) 0:00:41.072 *********** 2025-07-04 18:02:20.814631 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a98224fe-e18a-5ddc-b2f0-6ffdc4d7e2d6'}})  2025-07-04 18:02:20.815760 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '38a85088-e19d-56c7-801b-f45e1c084bd2'}})  2025-07-04 18:02:20.818469 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:02:20.819328 | orchestrator | 2025-07-04 18:02:20.820320 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-07-04 18:02:20.821106 | orchestrator | Friday 04 July 2025 18:02:20 +0000 (0:00:00.167) 0:00:41.240 *********** 2025-07-04 18:02:20.980850 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a98224fe-e18a-5ddc-b2f0-6ffdc4d7e2d6'}})  2025-07-04 18:02:20.981068 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '38a85088-e19d-56c7-801b-f45e1c084bd2'}})  2025-07-04 18:02:20.981615 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:02:20.982147 | orchestrator | 2025-07-04 18:02:20.983805 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-07-04 18:02:20.983838 | orchestrator | Friday 04 July 2025 18:02:20 +0000 (0:00:00.164) 0:00:41.405 *********** 2025-07-04 18:02:21.140201 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a98224fe-e18a-5ddc-b2f0-6ffdc4d7e2d6'}})  2025-07-04 18:02:21.141314 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '38a85088-e19d-56c7-801b-f45e1c084bd2'}})  2025-07-04 18:02:21.143986 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:02:21.144838 | orchestrator | 2025-07-04 18:02:21.146119 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-07-04 18:02:21.147028 | orchestrator | Friday 04 July 2025 18:02:21 +0000 
(0:00:00.158) 0:00:41.564 ***********
2025-07-04 18:02:21.307266 | orchestrator | ok: [testbed-node-5]
2025-07-04 18:02:21.307373 | orchestrator |
2025-07-04 18:02:21.308049 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-07-04 18:02:21.310305 | orchestrator | Friday 04 July 2025 18:02:21 +0000 (0:00:00.166) 0:00:41.730 ***********
2025-07-04 18:02:21.476848 | orchestrator | ok: [testbed-node-5]
2025-07-04 18:02:21.477036 | orchestrator |
2025-07-04 18:02:21.478987 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-07-04 18:02:21.479761 | orchestrator | Friday 04 July 2025 18:02:21 +0000 (0:00:00.171) 0:00:41.902 ***********
2025-07-04 18:02:21.657959 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:02:21.659205 | orchestrator |
2025-07-04 18:02:21.659413 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-07-04 18:02:21.659974 | orchestrator | Friday 04 July 2025 18:02:21 +0000 (0:00:00.180) 0:00:42.082 ***********
2025-07-04 18:02:21.826256 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:02:21.826615 | orchestrator |
2025-07-04 18:02:21.828347 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-07-04 18:02:21.835494 | orchestrator | Friday 04 July 2025 18:02:21 +0000 (0:00:00.167) 0:00:42.250 ***********
2025-07-04 18:02:21.979401 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:02:21.979905 | orchestrator |
2025-07-04 18:02:21.983462 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-07-04 18:02:21.984058 | orchestrator | Friday 04 July 2025 18:02:21 +0000 (0:00:00.153) 0:00:42.403 ***********
2025-07-04 18:02:22.121865 | orchestrator | ok: [testbed-node-5] => {
2025-07-04 18:02:22.122757 | orchestrator |     "ceph_osd_devices": {
2025-07-04 18:02:22.123579 | orchestrator |         "sdb": {
2025-07-04 18:02:22.125036 | orchestrator |             "osd_lvm_uuid": "a98224fe-e18a-5ddc-b2f0-6ffdc4d7e2d6"
2025-07-04 18:02:22.126063 | orchestrator |         },
2025-07-04 18:02:22.127188 | orchestrator |         "sdc": {
2025-07-04 18:02:22.127447 | orchestrator |             "osd_lvm_uuid": "38a85088-e19d-56c7-801b-f45e1c084bd2"
2025-07-04 18:02:22.128100 | orchestrator |         }
2025-07-04 18:02:22.128773 | orchestrator |     }
2025-07-04 18:02:22.128963 | orchestrator | }
2025-07-04 18:02:22.129497 | orchestrator |
2025-07-04 18:02:22.130287 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-07-04 18:02:22.130411 | orchestrator | Friday 04 July 2025 18:02:22 +0000 (0:00:00.142) 0:00:42.546 ***********
2025-07-04 18:02:22.286437 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:02:22.286543 | orchestrator |
2025-07-04 18:02:22.286559 | orchestrator | TASK [Print DB devices] ********************************************************
2025-07-04 18:02:22.286573 | orchestrator | Friday 04 July 2025 18:02:22 +0000 (0:00:00.165) 0:00:42.711 ***********
2025-07-04 18:02:22.675417 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:02:22.676302 | orchestrator |
2025-07-04 18:02:22.676338 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-07-04 18:02:22.676587 | orchestrator | Friday 04 July 2025 18:02:22 +0000 (0:00:00.388) 0:00:43.100 ***********
2025-07-04 18:02:22.818290 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:02:22.819142 | orchestrator |
2025-07-04 18:02:22.821971 | orchestrator | TASK [Print configuration data] ************************************************
2025-07-04 18:02:22.822089 | orchestrator | Friday 04 July 2025 18:02:22 +0000 (0:00:00.141) 0:00:43.242 ***********
2025-07-04 18:02:23.053775 | orchestrator | changed: [testbed-node-5] => {
2025-07-04 18:02:23.054887 | orchestrator |     "_ceph_configure_lvm_config_data": {
2025-07-04 18:02:23.055946 | orchestrator |         "ceph_osd_devices": {
2025-07-04 18:02:23.058726 | orchestrator |             "sdb": {
2025-07-04 18:02:23.062247 | orchestrator |                 "osd_lvm_uuid": "a98224fe-e18a-5ddc-b2f0-6ffdc4d7e2d6"
2025-07-04 18:02:23.062342 | orchestrator |             },
2025-07-04 18:02:23.062356 | orchestrator |             "sdc": {
2025-07-04 18:02:23.062367 | orchestrator |                 "osd_lvm_uuid": "38a85088-e19d-56c7-801b-f45e1c084bd2"
2025-07-04 18:02:23.062378 | orchestrator |             }
2025-07-04 18:02:23.062389 | orchestrator |         },
2025-07-04 18:02:23.062400 | orchestrator |         "lvm_volumes": [
2025-07-04 18:02:23.062497 | orchestrator |             {
2025-07-04 18:02:23.063028 | orchestrator |                 "data": "osd-block-a98224fe-e18a-5ddc-b2f0-6ffdc4d7e2d6",
2025-07-04 18:02:23.063052 | orchestrator |                 "data_vg": "ceph-a98224fe-e18a-5ddc-b2f0-6ffdc4d7e2d6"
2025-07-04 18:02:23.063392 | orchestrator |             },
2025-07-04 18:02:23.063682 | orchestrator |             {
2025-07-04 18:02:23.064500 | orchestrator |                 "data": "osd-block-38a85088-e19d-56c7-801b-f45e1c084bd2",
2025-07-04 18:02:23.065047 | orchestrator |                 "data_vg": "ceph-38a85088-e19d-56c7-801b-f45e1c084bd2"
2025-07-04 18:02:23.065567 | orchestrator |             }
2025-07-04 18:02:23.066563 | orchestrator |         ]
2025-07-04 18:02:23.066990 | orchestrator |     }
2025-07-04 18:02:23.067778 | orchestrator | }
2025-07-04 18:02:23.068130 | orchestrator |
2025-07-04 18:02:23.068692 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-07-04 18:02:23.069038 | orchestrator | Friday 04 July 2025 18:02:23 +0000 (0:00:00.234) 0:00:43.476 ***********
2025-07-04 18:02:24.117055 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-07-04 18:02:24.119624 | orchestrator |
2025-07-04 18:02:24.120513 | orchestrator | PLAY RECAP *********************************************************************
2025-07-04 18:02:24.120569 | orchestrator | 2025-07-04 18:02:24 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-07-04 18:02:24.120585 | orchestrator | 2025-07-04 18:02:24 | INFO  | Please wait and do not abort execution.
2025-07-04 18:02:24.122098 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-07-04 18:02:24.123662 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-07-04 18:02:24.125957 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-07-04 18:02:24.127031 | orchestrator |
2025-07-04 18:02:24.127078 | orchestrator |
2025-07-04 18:02:24.127928 | orchestrator |
2025-07-04 18:02:24.127983 | orchestrator | TASKS RECAP ********************************************************************
2025-07-04 18:02:24.128053 | orchestrator | Friday 04 July 2025 18:02:24 +0000 (0:00:01.065) 0:00:44.541 ***********
2025-07-04 18:02:24.128796 | orchestrator | ===============================================================================
2025-07-04 18:02:24.129281 | orchestrator | Write configuration file ------------------------------------------------ 4.60s
2025-07-04 18:02:24.129431 | orchestrator | Add known links to the list of available block devices ------------------ 1.29s
2025-07-04 18:02:24.130084 | orchestrator | Add known partitions to the list of available block devices ------------- 1.29s
2025-07-04 18:02:24.130691 | orchestrator | Get initial list of available block devices ----------------------------- 1.28s
2025-07-04 18:02:24.132049 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.04s
2025-07-04 18:02:24.133025 | orchestrator | Add known links to the list of available block devices ------------------ 0.91s
2025-07-04 18:02:24.133638 | orchestrator | Add known partitions to the list of available block devices ------------- 0.84s
2025-07-04 18:02:24.133661 | orchestrator | Add known links to the list of available block devices ------------------ 0.82s
2025-07-04 18:02:24.134681 | orchestrator | Add known links to the list of available block devices ------------------ 0.80s
2025-07-04 18:02:24.134895 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.79s
2025-07-04 18:02:24.135887 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.77s
2025-07-04 18:02:24.136536 | orchestrator | Add known partitions to the list of available block devices ------------- 0.72s
2025-07-04 18:02:24.139044 | orchestrator | Add known partitions to the list of available block devices ------------- 0.71s
2025-07-04 18:02:24.139286 | orchestrator | Add known links to the list of available block devices ------------------ 0.70s
2025-07-04 18:02:24.139312 | orchestrator | Print configuration data ------------------------------------------------ 0.69s
2025-07-04 18:02:24.139323 | orchestrator | Print DB devices -------------------------------------------------------- 0.68s
2025-07-04 18:02:24.139868 | orchestrator | Add known partitions to the list of available block devices ------------- 0.67s
2025-07-04 18:02:24.141759 | orchestrator | Add known links to the list of available block devices ------------------ 0.66s
2025-07-04 18:02:24.143851 | orchestrator | Set WAL devices config data --------------------------------------------- 0.65s
2025-07-04 18:02:24.143966 | orchestrator | Define lvm_volumes structures ------------------------------------------- 0.63s
2025-07-04 18:02:36.754501 | orchestrator | Registering Redlock._acquired_script
2025-07-04 18:02:36.754622 | orchestrator | Registering Redlock._extend_script
2025-07-04 18:02:36.754641 | orchestrator | Registering Redlock._release_script
2025-07-04 18:02:36.814395 | orchestrator | 2025-07-04 18:02:36 | INFO  | Task 5079615f-06a2-4802-bb8e-e5a61708ce0a (sync inventory) is running in background. Output coming soon.
2025-07-04 18:02:56.032074 | orchestrator | 2025-07-04 18:02:38 | INFO  | Starting group_vars file reorganization
2025-07-04 18:02:56.032231 | orchestrator | 2025-07-04 18:02:38 | INFO  | Moved 0 file(s) to their respective directories
2025-07-04 18:02:56.032252 | orchestrator | 2025-07-04 18:02:38 | INFO  | Group_vars file reorganization completed
2025-07-04 18:02:56.032264 | orchestrator | 2025-07-04 18:02:40 | INFO  | Starting variable preparation from inventory
2025-07-04 18:02:56.032275 | orchestrator | 2025-07-04 18:02:41 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2025-07-04 18:02:56.032287 | orchestrator | 2025-07-04 18:02:41 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2025-07-04 18:02:56.032325 | orchestrator | 2025-07-04 18:02:41 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2025-07-04 18:02:56.032336 | orchestrator | 2025-07-04 18:02:41 | INFO  | 3 file(s) written, 6 host(s) processed
2025-07-04 18:02:56.032347 | orchestrator | 2025-07-04 18:02:41 | INFO  | Variable preparation completed:
2025-07-04 18:02:56.032358 | orchestrator | 2025-07-04 18:02:42 | INFO  | Starting inventory overwrite handling
2025-07-04 18:02:56.032369 | orchestrator | 2025-07-04 18:02:42 | INFO  | Handling group overwrites in 99-overwrite
2025-07-04 18:02:56.032380 | orchestrator | 2025-07-04 18:02:42 | INFO  | Removing group frr:children from 60-generic
2025-07-04 18:02:56.032391 | orchestrator | 2025-07-04 18:02:42 | INFO  | Removing group storage:children from 50-kolla
2025-07-04 18:02:56.032401 | orchestrator | 2025-07-04 18:02:42 | INFO  | Removing group netbird:children from 50-infrastruture
2025-07-04 18:02:56.032421 | orchestrator | 2025-07-04 18:02:42 | INFO  | Removing group ceph-mds from 50-ceph
2025-07-04 18:02:56.032433 | orchestrator | 2025-07-04 18:02:42 | INFO  | Removing group ceph-rgw from 50-ceph
2025-07-04 18:02:56.032444 | orchestrator | 2025-07-04 18:02:42 | INFO  | Handling group overwrites in 20-roles
2025-07-04 18:02:56.032454 | orchestrator | 2025-07-04 18:02:42 | INFO  | Removing group k3s_node from 50-infrastruture
2025-07-04 18:02:56.032466 | orchestrator | 2025-07-04 18:02:42 | INFO  | Removed 6 group(s) in total
2025-07-04 18:02:56.032477 | orchestrator | 2025-07-04 18:02:42 | INFO  | Inventory overwrite handling completed
2025-07-04 18:02:56.032487 | orchestrator | 2025-07-04 18:02:43 | INFO  | Starting merge of inventory files
2025-07-04 18:02:56.032498 | orchestrator | 2025-07-04 18:02:43 | INFO  | Inventory files merged successfully
2025-07-04 18:02:56.032508 | orchestrator | 2025-07-04 18:02:47 | INFO  | Generating ClusterShell configuration from Ansible inventory
2025-07-04 18:02:56.032519 | orchestrator | 2025-07-04 18:02:54 | INFO  | Successfully wrote ClusterShell configuration
2025-07-04 18:02:56.032530 | orchestrator | [master f45afc7] 2025-07-04-18-02
2025-07-04 18:02:56.032543 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-)
2025-07-04 18:02:58.162671 | orchestrator | 2025-07-04 18:02:58 | INFO  | Task 90b5ad08-c9a7-442b-9149-824e412073ab (ceph-create-lvm-devices) was prepared for execution.
2025-07-04 18:02:58.162773 | orchestrator | 2025-07-04 18:02:58 | INFO  | It takes a moment until task 90b5ad08-c9a7-442b-9149-824e412073ab (ceph-create-lvm-devices) has been started and output is visible here.
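The "Print configuration data" output earlier in this log shows the pattern the playbook appears to follow when deriving `lvm_volumes` from `ceph_osd_devices`: each OSD's `osd_lvm_uuid` is reused as the suffix of both the logical volume name (`osd-block-<uuid>`) and the volume group name (`ceph-<uuid>`). A minimal Python sketch of that mapping (not the OSISM implementation itself), using the node-5 UUIDs from the log:

```python
# Hedged sketch: reproduce the ceph_osd_devices -> lvm_volumes mapping
# visible in the "Print configuration data" task output above.
# UUIDs are copied from the testbed-node-5 log; the function name is ours.
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "a98224fe-e18a-5ddc-b2f0-6ffdc4d7e2d6"},
    "sdc": {"osd_lvm_uuid": "38a85088-e19d-56c7-801b-f45e1c084bd2"},
}

def build_lvm_volumes(devices):
    """Build the block-only lvm_volumes list, one entry per OSD device."""
    return [
        {
            "data": f"osd-block-{cfg['osd_lvm_uuid']}",      # LV name
            "data_vg": f"ceph-{cfg['osd_lvm_uuid']}",        # VG name
        }
        for cfg in devices.values()
    ]

lvm_volumes = build_lvm_volumes(ceph_osd_devices)
print(lvm_volumes[0]["data_vg"])  # ceph-a98224fe-e18a-5ddc-b2f0-6ffdc4d7e2d6
```

Because the VG/LV names embed the stable per-device UUID, re-running the configuration play is idempotent: the same devices always map to the same names, which matches the unchanged `ceph_osd_devices` facts seen on repeat runs above.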
2025-07-04 18:03:02.431348 | orchestrator |
2025-07-04 18:03:02.432613 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-07-04 18:03:02.435125 | orchestrator |
2025-07-04 18:03:02.435361 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-07-04 18:03:02.436438 | orchestrator | Friday 04 July 2025 18:03:02 +0000 (0:00:00.340) 0:00:00.340 ***********
2025-07-04 18:03:02.678407 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-07-04 18:03:02.679712 | orchestrator |
2025-07-04 18:03:02.681241 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-07-04 18:03:02.681546 | orchestrator | Friday 04 July 2025 18:03:02 +0000 (0:00:00.250) 0:00:00.590 ***********
2025-07-04 18:03:02.899457 | orchestrator | ok: [testbed-node-3]
2025-07-04 18:03:02.900432 | orchestrator |
2025-07-04 18:03:02.901402 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-04 18:03:02.902237 | orchestrator | Friday 04 July 2025 18:03:02 +0000 (0:00:00.221) 0:00:00.812 ***********
2025-07-04 18:03:03.313026 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2025-07-04 18:03:03.314107 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2025-07-04 18:03:03.314147 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2025-07-04 18:03:03.315093 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2025-07-04 18:03:03.316087 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2025-07-04 18:03:03.316761 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2025-07-04 18:03:03.317554 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2025-07-04 18:03:03.318685 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2025-07-04 18:03:03.319156 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2025-07-04 18:03:03.319858 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2025-07-04 18:03:03.320260 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2025-07-04 18:03:03.320795 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2025-07-04 18:03:03.321363 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2025-07-04 18:03:03.321935 | orchestrator |
2025-07-04 18:03:03.322465 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-04 18:03:03.322762 | orchestrator | Friday 04 July 2025 18:03:03 +0000 (0:00:00.412) 0:00:01.225 ***********
2025-07-04 18:03:03.783638 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:03:03.784592 | orchestrator |
2025-07-04 18:03:03.785735 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-04 18:03:03.787056 | orchestrator | Friday 04 July 2025 18:03:03 +0000 (0:00:00.469) 0:00:01.695 ***********
2025-07-04 18:03:03.997400 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:03:03.998109 | orchestrator |
2025-07-04 18:03:04.000385 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-04 18:03:04.000435 | orchestrator | Friday 04 July 2025 18:03:03 +0000 (0:00:00.214) 0:00:01.910 ***********
2025-07-04 18:03:04.187667 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:03:04.188559 | orchestrator |
2025-07-04 18:03:04.191287 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-04 18:03:04.192406 | orchestrator | Friday 04 July 2025 18:03:04 +0000 (0:00:00.190) 0:00:02.100 ***********
2025-07-04 18:03:04.404902 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:03:04.406233 | orchestrator |
2025-07-04 18:03:04.407762 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-04 18:03:04.408349 | orchestrator | Friday 04 July 2025 18:03:04 +0000 (0:00:00.217) 0:00:02.318 ***********
2025-07-04 18:03:04.610792 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:03:04.611632 | orchestrator |
2025-07-04 18:03:04.612967 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-04 18:03:04.613984 | orchestrator | Friday 04 July 2025 18:03:04 +0000 (0:00:00.204) 0:00:02.523 ***********
2025-07-04 18:03:04.806527 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:03:04.806819 | orchestrator |
2025-07-04 18:03:04.808335 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-04 18:03:04.811268 | orchestrator | Friday 04 July 2025 18:03:04 +0000 (0:00:00.196) 0:00:02.719 ***********
2025-07-04 18:03:05.076956 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:03:05.077568 | orchestrator |
2025-07-04 18:03:05.078832 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-04 18:03:05.080160 | orchestrator | Friday 04 July 2025 18:03:05 +0000 (0:00:00.271) 0:00:02.990 ***********
2025-07-04 18:03:05.273217 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:03:05.276645 | orchestrator |
2025-07-04 18:03:05.277830 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-04 18:03:05.278449 | orchestrator | Friday 04 July 2025 18:03:05 +0000 (0:00:00.193) 0:00:03.184 ***********
2025-07-04 18:03:05.712323 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_ab42ac05-5a2a-4b10-b0be-14fcaa2726cd)
2025-07-04 18:03:05.712784 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_ab42ac05-5a2a-4b10-b0be-14fcaa2726cd)
2025-07-04 18:03:05.713556 | orchestrator |
2025-07-04 18:03:05.714938 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-04 18:03:05.716023 | orchestrator | Friday 04 July 2025 18:03:05 +0000 (0:00:00.442) 0:00:03.626 ***********
2025-07-04 18:03:06.149127 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_f1ee158f-8183-4691-b988-cdb0b3746d63)
2025-07-04 18:03:06.152505 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_f1ee158f-8183-4691-b988-cdb0b3746d63)
2025-07-04 18:03:06.154824 | orchestrator |
2025-07-04 18:03:06.155916 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-04 18:03:06.156595 | orchestrator | Friday 04 July 2025 18:03:06 +0000 (0:00:00.433) 0:00:04.059 ***********
2025-07-04 18:03:06.795335 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_cc10544f-afe1-4b17-ac35-d479dbd44023)
2025-07-04 18:03:06.796504 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_cc10544f-afe1-4b17-ac35-d479dbd44023)
2025-07-04 18:03:06.798078 | orchestrator |
2025-07-04 18:03:06.799163 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-04 18:03:06.799952 | orchestrator | Friday 04 July 2025 18:03:06 +0000 (0:00:00.643) 0:00:04.703 ***********
2025-07-04 18:03:07.464423 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_c678ea0e-f232-4db4-9458-94e4077f665f)
2025-07-04 18:03:07.464736 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_c678ea0e-f232-4db4-9458-94e4077f665f)
2025-07-04 18:03:07.466442 | orchestrator |
2025-07-04 18:03:07.467094 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-04 18:03:07.467701 | orchestrator | Friday 04 July 2025 18:03:07 +0000 (0:00:00.673) 0:00:05.376 ***********
2025-07-04 18:03:08.192541 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-07-04 18:03:08.195113 | orchestrator |
2025-07-04 18:03:08.195682 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-04 18:03:08.196852 | orchestrator | Friday 04 July 2025 18:03:08 +0000 (0:00:00.727) 0:00:06.104 ***********
2025-07-04 18:03:08.606648 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2025-07-04 18:03:08.608437 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2025-07-04 18:03:08.609712 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2025-07-04 18:03:08.611658 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2025-07-04 18:03:08.612078 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2025-07-04 18:03:08.612793 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2025-07-04 18:03:08.613550 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2025-07-04 18:03:08.614392 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2025-07-04 18:03:08.614745 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2025-07-04 18:03:08.615348 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2025-07-04 18:03:08.615849 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2025-07-04 18:03:08.616730 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2025-07-04 18:03:08.617207 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2025-07-04 18:03:08.617771 | orchestrator |
2025-07-04 18:03:08.618515 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-04 18:03:08.618907 | orchestrator | Friday 04 July 2025 18:03:08 +0000 (0:00:00.416) 0:00:06.520 ***********
2025-07-04 18:03:08.807989 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:03:08.808545 | orchestrator |
2025-07-04 18:03:08.809662 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-04 18:03:08.810556 | orchestrator | Friday 04 July 2025 18:03:08 +0000 (0:00:00.199) 0:00:06.719 ***********
2025-07-04 18:03:09.033905 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:03:09.035421 | orchestrator |
2025-07-04 18:03:09.036426 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-04 18:03:09.037062 | orchestrator | Friday 04 July 2025 18:03:09 +0000 (0:00:00.226) 0:00:06.946 ***********
2025-07-04 18:03:09.247316 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:03:09.248256 | orchestrator |
2025-07-04 18:03:09.248624 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-04 18:03:09.249526 | orchestrator | Friday 04 July 2025 18:03:09 +0000 (0:00:00.214) 0:00:07.160 ***********
2025-07-04 18:03:09.441389 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:03:09.441493 | orchestrator |
2025-07-04 18:03:09.441612 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-04 18:03:09.442376 | orchestrator | Friday 04 July 2025 18:03:09 +0000 (0:00:00.194) 0:00:07.354 ***********
2025-07-04 18:03:09.640283 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:03:09.641452 | orchestrator |
2025-07-04 18:03:09.642321 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-04 18:03:09.643398 | orchestrator | Friday 04 July 2025 18:03:09 +0000 (0:00:00.198) 0:00:07.553 ***********
2025-07-04 18:03:09.867530 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:03:09.867881 | orchestrator |
2025-07-04 18:03:09.868835 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-04 18:03:09.870535 | orchestrator | Friday 04 July 2025 18:03:09 +0000 (0:00:00.227) 0:00:07.780 ***********
2025-07-04 18:03:10.074478 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:03:10.075404 | orchestrator |
2025-07-04 18:03:10.076749 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-04 18:03:10.077738 | orchestrator | Friday 04 July 2025 18:03:10 +0000 (0:00:00.207) 0:00:07.987 ***********
2025-07-04 18:03:10.283635 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:03:10.283831 | orchestrator |
2025-07-04 18:03:10.284919 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-04 18:03:10.285580 | orchestrator | Friday 04 July 2025 18:03:10 +0000 (0:00:00.208) 0:00:08.196 ***********
2025-07-04 18:03:11.349063 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2025-07-04 18:03:11.349854 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2025-07-04 18:03:11.351043 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2025-07-04 18:03:11.352251 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2025-07-04 18:03:11.353044 | orchestrator |
2025-07-04 18:03:11.353941 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-04 18:03:11.355391 | orchestrator | Friday 04 July 2025 18:03:11 +0000 (0:00:01.063) 0:00:09.259 ***********
2025-07-04 18:03:11.554847 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:03:11.555445 | orchestrator |
2025-07-04 18:03:11.558372 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-04 18:03:11.558412 | orchestrator | Friday 04 July 2025 18:03:11 +0000 (0:00:00.207) 0:00:09.466 ***********
2025-07-04 18:03:11.762404 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:03:11.763084 | orchestrator |
2025-07-04 18:03:11.764049 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-04 18:03:11.765875 | orchestrator | Friday 04 July 2025 18:03:11 +0000 (0:00:00.205) 0:00:09.672 ***********
2025-07-04 18:03:11.966445 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:03:11.967723 | orchestrator |
2025-07-04 18:03:11.969253 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-04 18:03:11.970003 | orchestrator | Friday 04 July 2025 18:03:11 +0000 (0:00:00.207) 0:00:09.879 ***********
2025-07-04 18:03:12.183526 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:03:12.185379 | orchestrator |
2025-07-04 18:03:12.186850 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2025-07-04 18:03:12.187792 | orchestrator | Friday 04 July 2025 18:03:12 +0000 (0:00:00.215) 0:00:10.095 ***********
2025-07-04 18:03:12.325804 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:03:12.327887 | orchestrator |
2025-07-04 18:03:12.329308 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2025-07-04 18:03:12.330462 | orchestrator | Friday 04 July 2025 18:03:12 +0000 (0:00:00.142) 0:00:10.238 ***********
2025-07-04 18:03:12.523643 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '32d6ac83-1783-5cc7-8f93-7bc92d6b2f36'}})
2025-07-04 18:03:12.525755 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '50c65579-7f86-5010-a824-2221e6b8d3f0'}})
2025-07-04 18:03:12.525798 | orchestrator |
2025-07-04 18:03:12.525811 | orchestrator | TASK [Create block VGs] ********************************************************
2025-07-04 18:03:12.526069 | orchestrator | Friday 04 July 2025 18:03:12 +0000 (0:00:00.196) 0:00:10.434 ***********
2025-07-04 18:03:14.538262 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-32d6ac83-1783-5cc7-8f93-7bc92d6b2f36', 'data_vg': 'ceph-32d6ac83-1783-5cc7-8f93-7bc92d6b2f36'})
2025-07-04 18:03:14.539107 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-50c65579-7f86-5010-a824-2221e6b8d3f0', 'data_vg': 'ceph-50c65579-7f86-5010-a824-2221e6b8d3f0'})
2025-07-04 18:03:14.540018 | orchestrator |
2025-07-04 18:03:14.541480 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2025-07-04 18:03:14.542148 | orchestrator | Friday 04 July 2025 18:03:14 +0000 (0:00:02.014) 0:00:12.449 ***********
2025-07-04 18:03:14.707242 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-32d6ac83-1783-5cc7-8f93-7bc92d6b2f36', 'data_vg': 'ceph-32d6ac83-1783-5cc7-8f93-7bc92d6b2f36'})
2025-07-04 18:03:14.707500 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-50c65579-7f86-5010-a824-2221e6b8d3f0', 'data_vg': 'ceph-50c65579-7f86-5010-a824-2221e6b8d3f0'})
2025-07-04 18:03:14.709255 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:03:14.710870 | orchestrator |
2025-07-04 18:03:14.711197 | orchestrator | TASK [Create block LVs] ********************************************************
2025-07-04 18:03:14.711938 | orchestrator | Friday 04 July 2025 18:03:14 +0000 (0:00:00.170) 0:00:12.619 ***********
2025-07-04 18:03:16.104933 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-32d6ac83-1783-5cc7-8f93-7bc92d6b2f36', 'data_vg': 'ceph-32d6ac83-1783-5cc7-8f93-7bc92d6b2f36'})
2025-07-04 18:03:16.106966 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-50c65579-7f86-5010-a824-2221e6b8d3f0', 'data_vg': 'ceph-50c65579-7f86-5010-a824-2221e6b8d3f0'})
2025-07-04 18:03:16.107027 | orchestrator |
2025-07-04 18:03:16.108613 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2025-07-04 18:03:16.110505 | orchestrator | Friday 04 July 2025 18:03:16 +0000 (0:00:01.396) 0:00:14.016 ***********
2025-07-04 18:03:16.281144 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-32d6ac83-1783-5cc7-8f93-7bc92d6b2f36', 'data_vg': 'ceph-32d6ac83-1783-5cc7-8f93-7bc92d6b2f36'})
2025-07-04 18:03:16.282247 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-50c65579-7f86-5010-a824-2221e6b8d3f0', 'data_vg': 'ceph-50c65579-7f86-5010-a824-2221e6b8d3f0'})
2025-07-04 18:03:16.282762 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:03:16.283996 | orchestrator |
2025-07-04 18:03:16.284785 | orchestrator | TASK [Create DB VGs] ***********************************************************
2025-07-04 18:03:16.286303 | orchestrator | Friday 04 July 2025 18:03:16 +0000 (0:00:00.143) 0:00:14.193 ***********
2025-07-04 18:03:16.423424 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:03:16.424049 | orchestrator |
2025-07-04 18:03:16.425770 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2025-07-04 18:03:16.427047 | orchestrator | Friday 04 July 2025 18:03:16 +0000 (0:00:00.143) 0:00:14.336 ***********
2025-07-04 18:03:16.809536 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-32d6ac83-1783-5cc7-8f93-7bc92d6b2f36', 'data_vg': 'ceph-32d6ac83-1783-5cc7-8f93-7bc92d6b2f36'})
2025-07-04 18:03:16.810753 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-50c65579-7f86-5010-a824-2221e6b8d3f0', 'data_vg': 'ceph-50c65579-7f86-5010-a824-2221e6b8d3f0'})
2025-07-04 18:03:16.811668 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:03:16.812671 | orchestrator |
2025-07-04 18:03:16.814064 | orchestrator | TASK [Create WAL VGs] **********************************************************
2025-07-04 18:03:16.814821 | orchestrator | Friday 04 July 2025 18:03:16 +0000 (0:00:00.384) 0:00:14.721 ***********
2025-07-04 18:03:16.955959 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:03:16.956286 | orchestrator |
2025-07-04 18:03:16.957295 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2025-07-04 18:03:16.959079 | orchestrator | Friday 04 July 2025 18:03:16 +0000 (0:00:00.147) 0:00:14.869 ***********
2025-07-04 18:03:17.111498 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-32d6ac83-1783-5cc7-8f93-7bc92d6b2f36', 'data_vg': 'ceph-32d6ac83-1783-5cc7-8f93-7bc92d6b2f36'})
2025-07-04 18:03:17.112552 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-50c65579-7f86-5010-a824-2221e6b8d3f0', 'data_vg': 'ceph-50c65579-7f86-5010-a824-2221e6b8d3f0'})
2025-07-04 18:03:17.113934 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:03:17.115038 | orchestrator |
2025-07-04 18:03:17.116920 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2025-07-04 18:03:17.117572 | orchestrator | Friday 04 July 2025 18:03:17 +0000 (0:00:00.155) 0:00:15.024 ***********
2025-07-04 18:03:17.243319 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:03:17.243932 | orchestrator |
2025-07-04 18:03:17.245464 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2025-07-04 18:03:17.247142 | orchestrator | Friday 04 July 2025 18:03:17 +0000 (0:00:00.131) 0:00:15.156 ***********
2025-07-04 18:03:17.406452 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-32d6ac83-1783-5cc7-8f93-7bc92d6b2f36', 'data_vg': 'ceph-32d6ac83-1783-5cc7-8f93-7bc92d6b2f36'})
2025-07-04 18:03:17.406560 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-50c65579-7f86-5010-a824-2221e6b8d3f0', 'data_vg': 'ceph-50c65579-7f86-5010-a824-2221e6b8d3f0'})
2025-07-04 18:03:17.407385 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:03:17.408302 | orchestrator |
2025-07-04 18:03:17.410535 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2025-07-04 18:03:17.410563 | orchestrator | Friday 04 July 2025 18:03:17 +0000 (0:00:00.163) 0:00:15.319 ***********
2025-07-04 18:03:17.552607 | orchestrator | ok: [testbed-node-3]
2025-07-04 18:03:17.553372 | orchestrator |
2025-07-04 18:03:17.554079 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2025-07-04 18:03:17.554416 | orchestrator | Friday 04 July 2025 18:03:17 +0000 (0:00:00.146) 0:00:15.466 ***********
2025-07-04 18:03:17.713559 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-32d6ac83-1783-5cc7-8f93-7bc92d6b2f36', 'data_vg': 'ceph-32d6ac83-1783-5cc7-8f93-7bc92d6b2f36'})
2025-07-04 18:03:17.714533 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-50c65579-7f86-5010-a824-2221e6b8d3f0', 'data_vg': 'ceph-50c65579-7f86-5010-a824-2221e6b8d3f0'})
2025-07-04 18:03:17.715559 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:03:17.716571 | orchestrator |
2025-07-04 18:03:17.718129 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2025-07-04 18:03:17.719036 | orchestrator | Friday 04 July 2025 18:03:17 +0000 (0:00:00.159) 0:00:15.625 ***********
2025-07-04 18:03:17.866551 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-32d6ac83-1783-5cc7-8f93-7bc92d6b2f36', 'data_vg': 'ceph-32d6ac83-1783-5cc7-8f93-7bc92d6b2f36'})
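The "Create dict of block VGs -> PVs" and "Create block VGs/LVs" tasks above derive deterministic LVM names from each OSD's `osd_lvm_uuid`: one volume group `ceph-<uuid>` and one logical volume `osd-block-<uuid>` per device. A sketch of that naming scheme as inferred from the log output (the actual logic lives in OSISM's `ceph-create-lvm-devices` playbook):

```python
# Sketch of the VG/LV naming seen in the log, not the playbook itself.
# Input mirrors the ceph_osd_devices items logged for testbed-node-3.
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "32d6ac83-1783-5cc7-8f93-7bc92d6b2f36"},
    "sdc": {"osd_lvm_uuid": "50c65579-7f86-5010-a824-2221e6b8d3f0"},
}

lvm_volumes = [
    {
        "data": f"osd-block-{v['osd_lvm_uuid']}",   # LV name (block data)
        "data_vg": f"ceph-{v['osd_lvm_uuid']}",     # VG name, one VG per device
    }
    for v in ceph_osd_devices.values()
]
```

These are exactly the `data`/`data_vg` pairs that appear as loop items in the "Create block VGs" and "Create block LVs" tasks.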
2025-07-04 18:03:17.867156 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-50c65579-7f86-5010-a824-2221e6b8d3f0', 'data_vg': 'ceph-50c65579-7f86-5010-a824-2221e6b8d3f0'})
2025-07-04 18:03:17.868222 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:03:17.869122 | orchestrator |
2025-07-04 18:03:17.869893 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2025-07-04 18:03:17.871235 | orchestrator | Friday 04 July 2025 18:03:17 +0000 (0:00:00.154) 0:00:15.779 ***********
2025-07-04 18:03:18.025225 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-32d6ac83-1783-5cc7-8f93-7bc92d6b2f36', 'data_vg': 'ceph-32d6ac83-1783-5cc7-8f93-7bc92d6b2f36'})
2025-07-04 18:03:18.025410 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-50c65579-7f86-5010-a824-2221e6b8d3f0', 'data_vg': 'ceph-50c65579-7f86-5010-a824-2221e6b8d3f0'})
2025-07-04 18:03:18.026193 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:03:18.027751 | orchestrator |
2025-07-04 18:03:18.028963 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2025-07-04 18:03:18.030699 | orchestrator | Friday 04 July 2025 18:03:18 +0000 (0:00:00.158) 0:00:15.938 ***********
2025-07-04 18:03:18.163449 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:03:18.163566 | orchestrator |
2025-07-04 18:03:18.164297 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2025-07-04 18:03:18.165351 | orchestrator | Friday 04 July 2025 18:03:18 +0000 (0:00:00.138) 0:00:16.076 ***********
2025-07-04 18:03:18.304877 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:03:18.305874 | orchestrator |
2025-07-04 18:03:18.307535 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2025-07-04 18:03:18.308253 | orchestrator | Friday 04 July 2025 18:03:18 +0000 (0:00:00.138) 0:00:16.215 ***********
2025-07-04 18:03:18.457933 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:03:18.458558 | orchestrator |
2025-07-04 18:03:18.459869 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2025-07-04 18:03:18.463364 | orchestrator | Friday 04 July 2025 18:03:18 +0000 (0:00:00.154) 0:00:16.370 ***********
2025-07-04 18:03:18.853478 | orchestrator | ok: [testbed-node-3] => {
2025-07-04 18:03:18.856292 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2025-07-04 18:03:18.856360 | orchestrator | }
2025-07-04 18:03:18.862481 | orchestrator |
2025-07-04 18:03:18.863233 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2025-07-04 18:03:18.864592 | orchestrator | Friday 04 July 2025 18:03:18 +0000 (0:00:00.393) 0:00:16.764 ***********
2025-07-04 18:03:19.003843 | orchestrator | ok: [testbed-node-3] => {
2025-07-04 18:03:19.005377 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2025-07-04 18:03:19.006659 | orchestrator | }
2025-07-04 18:03:19.007538 | orchestrator |
2025-07-04 18:03:19.008856 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2025-07-04 18:03:19.009475 | orchestrator | Friday 04 July 2025 18:03:18 +0000 (0:00:00.151) 0:00:16.916 ***********
2025-07-04 18:03:19.151543 | orchestrator | ok: [testbed-node-3] => {
2025-07-04 18:03:19.152856 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2025-07-04 18:03:19.154965 | orchestrator | }
2025-07-04 18:03:19.156152 | orchestrator |
2025-07-04 18:03:19.157351 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2025-07-04 18:03:19.158353 | orchestrator | Friday 04 July 2025 18:03:19 +0000 (0:00:00.147) 0:00:17.064 ***********
2025-07-04 18:03:19.806141 | orchestrator | ok: [testbed-node-3]
2025-07-04 18:03:19.806323 | orchestrator |
2025-07-04 18:03:19.806894 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2025-07-04 18:03:19.808487 | orchestrator | Friday 04 July 2025 18:03:19 +0000 (0:00:00.654) 0:00:17.718 ***********
2025-07-04 18:03:20.288391 | orchestrator | ok: [testbed-node-3]
2025-07-04 18:03:20.288626 | orchestrator |
2025-07-04 18:03:20.289755 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2025-07-04 18:03:20.291882 | orchestrator | Friday 04 July 2025 18:03:20 +0000 (0:00:00.482) 0:00:18.200 ***********
2025-07-04 18:03:20.803766 | orchestrator | ok: [testbed-node-3]
2025-07-04 18:03:20.803983 | orchestrator |
2025-07-04 18:03:20.804389 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2025-07-04 18:03:20.804749 | orchestrator | Friday 04 July 2025 18:03:20 +0000 (0:00:00.515) 0:00:18.716 ***********
2025-07-04 18:03:20.959640 | orchestrator | ok: [testbed-node-3]
2025-07-04 18:03:20.960337 | orchestrator |
2025-07-04 18:03:20.961379 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2025-07-04 18:03:20.963949 | orchestrator | Friday 04 July 2025 18:03:20 +0000 (0:00:00.157) 0:00:18.873 ***********
2025-07-04 18:03:21.084855 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:03:21.085058 | orchestrator |
2025-07-04 18:03:21.086248 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2025-07-04 18:03:21.087056 | orchestrator | Friday 04 July 2025 18:03:21 +0000 (0:00:00.125) 0:00:18.998 ***********
2025-07-04 18:03:21.219534 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:03:21.219702 | orchestrator |
2025-07-04 18:03:21.220501 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2025-07-04 18:03:21.221019 | orchestrator | Friday 04 July 2025 18:03:21 +0000 (0:00:00.133) 0:00:19.131 ***********
2025-07-04 18:03:21.364525 | orchestrator | ok: [testbed-node-3] => {
2025-07-04 18:03:21.365104 | orchestrator |     "vgs_report": {
2025-07-04 18:03:21.366835 | orchestrator |         "vg": []
2025-07-04 18:03:21.366998 | orchestrator |     }
2025-07-04 18:03:21.367759 | orchestrator | }
2025-07-04 18:03:21.368413 | orchestrator |
2025-07-04 18:03:21.368909 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2025-07-04 18:03:21.369433 | orchestrator | Friday 04 July 2025 18:03:21 +0000 (0:00:00.144) 0:00:19.276 ***********
2025-07-04 18:03:21.513703 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:03:21.514105 | orchestrator |
2025-07-04 18:03:21.514660 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2025-07-04 18:03:21.515977 | orchestrator | Friday 04 July 2025 18:03:21 +0000 (0:00:00.149) 0:00:19.425 ***********
2025-07-04 18:03:21.664712 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:03:21.665118 | orchestrator |
2025-07-04 18:03:21.667586 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2025-07-04 18:03:21.668187 | orchestrator | Friday 04 July 2025 18:03:21 +0000 (0:00:00.150) 0:00:19.576 ***********
2025-07-04 18:03:22.013583 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:03:22.014982 | orchestrator |
2025-07-04 18:03:22.015650 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2025-07-04 18:03:22.016971 | orchestrator | Friday 04 July 2025 18:03:22 +0000 (0:00:00.350) 0:00:19.926 ***********
2025-07-04 18:03:22.156068 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:03:22.156734 | orchestrator |
2025-07-04 18:03:22.157343 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2025-07-04 18:03:22.158104 | orchestrator | Friday 04 July 2025 18:03:22 +0000 (0:00:00.143) 0:00:20.070 ***********
2025-07-04 18:03:22.310470 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:03:22.311508 | orchestrator |
2025-07-04 18:03:22.312375 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2025-07-04 18:03:22.313140 | orchestrator | Friday 04 July 2025 18:03:22 +0000 (0:00:00.152) 0:00:20.222 ***********
2025-07-04 18:03:22.446379 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:03:22.447402 | orchestrator |
2025-07-04 18:03:22.449066 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2025-07-04 18:03:22.450620 | orchestrator | Friday 04 July 2025 18:03:22 +0000 (0:00:00.136) 0:00:20.359 ***********
2025-07-04 18:03:22.590763 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:03:22.590938 | orchestrator |
2025-07-04 18:03:22.591770 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2025-07-04 18:03:22.592347 | orchestrator | Friday 04 July 2025 18:03:22 +0000 (0:00:00.145) 0:00:20.504 ***********
2025-07-04 18:03:22.736003 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:03:22.737174 | orchestrator |
2025-07-04 18:03:22.737708 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2025-07-04 18:03:22.738939 | orchestrator | Friday 04 July 2025 18:03:22 +0000 (0:00:00.143) 0:00:20.648 ***********
2025-07-04 18:03:22.870989 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:03:22.872000 | orchestrator |
2025-07-04 18:03:22.872868 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2025-07-04 18:03:22.874402 | orchestrator | Friday 04 July 2025 18:03:22 +0000 (0:00:00.136) 0:00:20.784 ***********
2025-07-04 18:03:23.006446 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:03:23.006895 | orchestrator |
2025-07-04 18:03:23.008534 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2025-07-04 18:03:23.011534 | orchestrator | Friday 04 July 2025 18:03:22 +0000 (0:00:00.134) 0:00:20.919 ***********
2025-07-04 18:03:23.139797 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:03:23.140597 | orchestrator |
2025-07-04 18:03:23.141918 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2025-07-04 18:03:23.143521 | orchestrator | Friday 04 July 2025 18:03:23 +0000 (0:00:00.133) 0:00:21.053 ***********
2025-07-04 18:03:23.273651 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:03:23.274446 | orchestrator |
2025-07-04 18:03:23.276370 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2025-07-04 18:03:23.277289 | orchestrator | Friday 04 July 2025 18:03:23 +0000 (0:00:00.133) 0:00:21.186 ***********
2025-07-04 18:03:23.413553 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:03:23.415821 | orchestrator |
2025-07-04 18:03:23.415871 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2025-07-04 18:03:23.417214 | orchestrator | Friday 04 July 2025 18:03:23 +0000 (0:00:00.137) 0:00:21.324 ***********
2025-07-04 18:03:23.558800 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:03:23.559540 | orchestrator |
2025-07-04 18:03:23.560687 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2025-07-04 18:03:23.561582 | orchestrator | Friday 04 July 2025 18:03:23 +0000 (0:00:00.147) 0:00:21.472 ***********
2025-07-04 18:03:23.713525 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-32d6ac83-1783-5cc7-8f93-7bc92d6b2f36', 'data_vg': 'ceph-32d6ac83-1783-5cc7-8f93-7bc92d6b2f36'})
2025-07-04 18:03:23.715012 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-50c65579-7f86-5010-a824-2221e6b8d3f0', 'data_vg': 'ceph-50c65579-7f86-5010-a824-2221e6b8d3f0'})
2025-07-04 18:03:23.716719 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:03:23.717703 | orchestrator |
2025-07-04 18:03:23.718264 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2025-07-04 18:03:23.718734 | orchestrator | Friday 04 July 2025 18:03:23 +0000 (0:00:00.152) 0:00:21.625 ***********
2025-07-04 18:03:24.079585 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-32d6ac83-1783-5cc7-8f93-7bc92d6b2f36', 'data_vg': 'ceph-32d6ac83-1783-5cc7-8f93-7bc92d6b2f36'})
2025-07-04 18:03:24.080084 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-50c65579-7f86-5010-a824-2221e6b8d3f0', 'data_vg': 'ceph-50c65579-7f86-5010-a824-2221e6b8d3f0'})
2025-07-04 18:03:24.081082 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:03:24.082281 | orchestrator |
2025-07-04 18:03:24.083502 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2025-07-04 18:03:24.084744 | orchestrator | Friday 04 July 2025 18:03:24 +0000 (0:00:00.367) 0:00:21.992 ***********
2025-07-04 18:03:24.239875 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-32d6ac83-1783-5cc7-8f93-7bc92d6b2f36', 'data_vg': 'ceph-32d6ac83-1783-5cc7-8f93-7bc92d6b2f36'})
2025-07-04 18:03:24.240825 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-50c65579-7f86-5010-a824-2221e6b8d3f0', 'data_vg': 'ceph-50c65579-7f86-5010-a824-2221e6b8d3f0'})
2025-07-04 18:03:24.241349 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:03:24.242833 | orchestrator |
2025-07-04 18:03:24.243802 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2025-07-04 18:03:24.244377 | orchestrator | Friday 04 July 2025 18:03:24 +0000 (0:00:00.160) 0:00:22.153 ***********
2025-07-04 18:03:24.423434 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-32d6ac83-1783-5cc7-8f93-7bc92d6b2f36', 'data_vg': 'ceph-32d6ac83-1783-5cc7-8f93-7bc92d6b2f36'})
2025-07-04
18:03:24.424435 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-50c65579-7f86-5010-a824-2221e6b8d3f0', 'data_vg': 'ceph-50c65579-7f86-5010-a824-2221e6b8d3f0'})  2025-07-04 18:03:24.426199 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:03:24.427361 | orchestrator | 2025-07-04 18:03:24.428749 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-07-04 18:03:24.429800 | orchestrator | Friday 04 July 2025 18:03:24 +0000 (0:00:00.181) 0:00:22.335 *********** 2025-07-04 18:03:24.614810 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-32d6ac83-1783-5cc7-8f93-7bc92d6b2f36', 'data_vg': 'ceph-32d6ac83-1783-5cc7-8f93-7bc92d6b2f36'})  2025-07-04 18:03:24.615032 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-50c65579-7f86-5010-a824-2221e6b8d3f0', 'data_vg': 'ceph-50c65579-7f86-5010-a824-2221e6b8d3f0'})  2025-07-04 18:03:24.616176 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:03:24.617341 | orchestrator | 2025-07-04 18:03:24.618511 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-07-04 18:03:24.619765 | orchestrator | Friday 04 July 2025 18:03:24 +0000 (0:00:00.190) 0:00:22.526 *********** 2025-07-04 18:03:24.781338 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-32d6ac83-1783-5cc7-8f93-7bc92d6b2f36', 'data_vg': 'ceph-32d6ac83-1783-5cc7-8f93-7bc92d6b2f36'})  2025-07-04 18:03:24.782362 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-50c65579-7f86-5010-a824-2221e6b8d3f0', 'data_vg': 'ceph-50c65579-7f86-5010-a824-2221e6b8d3f0'})  2025-07-04 18:03:24.784317 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:03:24.785389 | orchestrator | 2025-07-04 18:03:24.786596 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-07-04 18:03:24.787381 | orchestrator | Friday 04 July 2025 
18:03:24 +0000 (0:00:00.166) 0:00:22.692 *********** 2025-07-04 18:03:24.942443 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-32d6ac83-1783-5cc7-8f93-7bc92d6b2f36', 'data_vg': 'ceph-32d6ac83-1783-5cc7-8f93-7bc92d6b2f36'})  2025-07-04 18:03:24.942828 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-50c65579-7f86-5010-a824-2221e6b8d3f0', 'data_vg': 'ceph-50c65579-7f86-5010-a824-2221e6b8d3f0'})  2025-07-04 18:03:24.944097 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:03:24.945238 | orchestrator | 2025-07-04 18:03:24.946652 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-07-04 18:03:24.947042 | orchestrator | Friday 04 July 2025 18:03:24 +0000 (0:00:00.163) 0:00:22.856 *********** 2025-07-04 18:03:25.095201 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-32d6ac83-1783-5cc7-8f93-7bc92d6b2f36', 'data_vg': 'ceph-32d6ac83-1783-5cc7-8f93-7bc92d6b2f36'})  2025-07-04 18:03:25.095644 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-50c65579-7f86-5010-a824-2221e6b8d3f0', 'data_vg': 'ceph-50c65579-7f86-5010-a824-2221e6b8d3f0'})  2025-07-04 18:03:25.096115 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:03:25.097037 | orchestrator | 2025-07-04 18:03:25.100004 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-07-04 18:03:25.100077 | orchestrator | Friday 04 July 2025 18:03:25 +0000 (0:00:00.152) 0:00:23.008 *********** 2025-07-04 18:03:25.614676 | orchestrator | ok: [testbed-node-3] 2025-07-04 18:03:25.615326 | orchestrator | 2025-07-04 18:03:25.616740 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-07-04 18:03:25.618232 | orchestrator | Friday 04 July 2025 18:03:25 +0000 (0:00:00.518) 0:00:23.527 *********** 2025-07-04 18:03:26.108900 | orchestrator | ok: [testbed-node-3] 2025-07-04 18:03:26.109782 | 
orchestrator | 2025-07-04 18:03:26.110375 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-07-04 18:03:26.110969 | orchestrator | Friday 04 July 2025 18:03:26 +0000 (0:00:00.493) 0:00:24.020 *********** 2025-07-04 18:03:26.254313 | orchestrator | ok: [testbed-node-3] 2025-07-04 18:03:26.254879 | orchestrator | 2025-07-04 18:03:26.255600 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-07-04 18:03:26.257451 | orchestrator | Friday 04 July 2025 18:03:26 +0000 (0:00:00.146) 0:00:24.167 *********** 2025-07-04 18:03:26.448555 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-32d6ac83-1783-5cc7-8f93-7bc92d6b2f36', 'vg_name': 'ceph-32d6ac83-1783-5cc7-8f93-7bc92d6b2f36'}) 2025-07-04 18:03:26.449950 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-50c65579-7f86-5010-a824-2221e6b8d3f0', 'vg_name': 'ceph-50c65579-7f86-5010-a824-2221e6b8d3f0'}) 2025-07-04 18:03:26.450281 | orchestrator | 2025-07-04 18:03:26.451324 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-07-04 18:03:26.452626 | orchestrator | Friday 04 July 2025 18:03:26 +0000 (0:00:00.194) 0:00:24.362 *********** 2025-07-04 18:03:26.676908 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-32d6ac83-1783-5cc7-8f93-7bc92d6b2f36', 'data_vg': 'ceph-32d6ac83-1783-5cc7-8f93-7bc92d6b2f36'})  2025-07-04 18:03:26.678492 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-50c65579-7f86-5010-a824-2221e6b8d3f0', 'data_vg': 'ceph-50c65579-7f86-5010-a824-2221e6b8d3f0'})  2025-07-04 18:03:26.678899 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:03:26.679994 | orchestrator | 2025-07-04 18:03:26.681813 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-07-04 18:03:26.682289 | orchestrator | Friday 04 July 2025 18:03:26 +0000 
(0:00:00.227) 0:00:24.589 *********** 2025-07-04 18:03:27.136910 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-32d6ac83-1783-5cc7-8f93-7bc92d6b2f36', 'data_vg': 'ceph-32d6ac83-1783-5cc7-8f93-7bc92d6b2f36'})  2025-07-04 18:03:27.139495 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-50c65579-7f86-5010-a824-2221e6b8d3f0', 'data_vg': 'ceph-50c65579-7f86-5010-a824-2221e6b8d3f0'})  2025-07-04 18:03:27.139565 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:03:27.139837 | orchestrator | 2025-07-04 18:03:27.140963 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-07-04 18:03:27.142502 | orchestrator | Friday 04 July 2025 18:03:27 +0000 (0:00:00.459) 0:00:25.049 *********** 2025-07-04 18:03:27.306344 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-32d6ac83-1783-5cc7-8f93-7bc92d6b2f36', 'data_vg': 'ceph-32d6ac83-1783-5cc7-8f93-7bc92d6b2f36'})  2025-07-04 18:03:27.307753 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-50c65579-7f86-5010-a824-2221e6b8d3f0', 'data_vg': 'ceph-50c65579-7f86-5010-a824-2221e6b8d3f0'})  2025-07-04 18:03:27.307997 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:03:27.310895 | orchestrator | 2025-07-04 18:03:27.310975 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-07-04 18:03:27.311820 | orchestrator | Friday 04 July 2025 18:03:27 +0000 (0:00:00.170) 0:00:25.219 *********** 2025-07-04 18:03:27.665616 | orchestrator | ok: [testbed-node-3] => { 2025-07-04 18:03:27.666076 | orchestrator |  "lvm_report": { 2025-07-04 18:03:27.669007 | orchestrator |  "lv": [ 2025-07-04 18:03:27.669611 | orchestrator |  { 2025-07-04 18:03:27.671712 | orchestrator |  "lv_name": "osd-block-32d6ac83-1783-5cc7-8f93-7bc92d6b2f36", 2025-07-04 18:03:27.674402 | orchestrator |  "vg_name": "ceph-32d6ac83-1783-5cc7-8f93-7bc92d6b2f36" 2025-07-04 
18:03:27.674441 | orchestrator |  }, 2025-07-04 18:03:27.676527 | orchestrator |  { 2025-07-04 18:03:27.676582 | orchestrator |  "lv_name": "osd-block-50c65579-7f86-5010-a824-2221e6b8d3f0", 2025-07-04 18:03:27.680286 | orchestrator |  "vg_name": "ceph-50c65579-7f86-5010-a824-2221e6b8d3f0" 2025-07-04 18:03:27.680318 | orchestrator |  } 2025-07-04 18:03:27.681970 | orchestrator |  ], 2025-07-04 18:03:27.682708 | orchestrator |  "pv": [ 2025-07-04 18:03:27.683448 | orchestrator |  { 2025-07-04 18:03:27.684099 | orchestrator |  "pv_name": "/dev/sdb", 2025-07-04 18:03:27.685044 | orchestrator |  "vg_name": "ceph-32d6ac83-1783-5cc7-8f93-7bc92d6b2f36" 2025-07-04 18:03:27.687148 | orchestrator |  }, 2025-07-04 18:03:27.687217 | orchestrator |  { 2025-07-04 18:03:27.688006 | orchestrator |  "pv_name": "/dev/sdc", 2025-07-04 18:03:27.688704 | orchestrator |  "vg_name": "ceph-50c65579-7f86-5010-a824-2221e6b8d3f0" 2025-07-04 18:03:27.689996 | orchestrator |  } 2025-07-04 18:03:27.691011 | orchestrator |  ] 2025-07-04 18:03:27.691827 | orchestrator |  } 2025-07-04 18:03:27.692474 | orchestrator | } 2025-07-04 18:03:27.693039 | orchestrator | 2025-07-04 18:03:27.693764 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-07-04 18:03:27.695731 | orchestrator | 2025-07-04 18:03:27.697623 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-07-04 18:03:27.698824 | orchestrator | Friday 04 July 2025 18:03:27 +0000 (0:00:00.359) 0:00:25.578 *********** 2025-07-04 18:03:27.922646 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-07-04 18:03:27.922749 | orchestrator | 2025-07-04 18:03:27.923873 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-07-04 18:03:27.924900 | orchestrator | Friday 04 July 2025 18:03:27 +0000 (0:00:00.253) 0:00:25.831 *********** 2025-07-04 18:03:28.172099 | orchestrator | ok: 
[testbed-node-4] 2025-07-04 18:03:28.172349 | orchestrator | 2025-07-04 18:03:28.173792 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-04 18:03:28.173983 | orchestrator | Friday 04 July 2025 18:03:28 +0000 (0:00:00.254) 0:00:26.085 *********** 2025-07-04 18:03:28.597694 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-07-04 18:03:28.598195 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-07-04 18:03:28.599399 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-07-04 18:03:28.600174 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-07-04 18:03:28.600937 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-07-04 18:03:28.602249 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-07-04 18:03:28.604692 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-07-04 18:03:28.605807 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-07-04 18:03:28.605886 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-07-04 18:03:28.606839 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-07-04 18:03:28.607258 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-07-04 18:03:28.608048 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-07-04 18:03:28.608595 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-07-04 18:03:28.608986 | orchestrator | 2025-07-04 
18:03:28.609506 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-04 18:03:28.610439 | orchestrator | Friday 04 July 2025 18:03:28 +0000 (0:00:00.425) 0:00:26.511 *********** 2025-07-04 18:03:28.794118 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:03:28.794441 | orchestrator | 2025-07-04 18:03:28.795568 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-04 18:03:28.796019 | orchestrator | Friday 04 July 2025 18:03:28 +0000 (0:00:00.195) 0:00:26.706 *********** 2025-07-04 18:03:29.000397 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:03:29.002102 | orchestrator | 2025-07-04 18:03:29.003904 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-04 18:03:29.004816 | orchestrator | Friday 04 July 2025 18:03:28 +0000 (0:00:00.205) 0:00:26.912 *********** 2025-07-04 18:03:29.184628 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:03:29.184729 | orchestrator | 2025-07-04 18:03:29.185430 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-04 18:03:29.186712 | orchestrator | Friday 04 July 2025 18:03:29 +0000 (0:00:00.183) 0:00:27.096 *********** 2025-07-04 18:03:29.811490 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:03:29.813305 | orchestrator | 2025-07-04 18:03:29.813479 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-04 18:03:29.815647 | orchestrator | Friday 04 July 2025 18:03:29 +0000 (0:00:00.628) 0:00:27.724 *********** 2025-07-04 18:03:30.022720 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:03:30.023501 | orchestrator | 2025-07-04 18:03:30.025731 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-04 18:03:30.026120 | orchestrator | Friday 04 July 2025 18:03:30 +0000 (0:00:00.210) 
0:00:27.935 *********** 2025-07-04 18:03:30.229524 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:03:30.231371 | orchestrator | 2025-07-04 18:03:30.233074 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-04 18:03:30.233999 | orchestrator | Friday 04 July 2025 18:03:30 +0000 (0:00:00.207) 0:00:28.142 *********** 2025-07-04 18:03:30.438337 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:03:30.438769 | orchestrator | 2025-07-04 18:03:30.440131 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-04 18:03:30.441216 | orchestrator | Friday 04 July 2025 18:03:30 +0000 (0:00:00.208) 0:00:28.350 *********** 2025-07-04 18:03:30.638484 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:03:30.639513 | orchestrator | 2025-07-04 18:03:30.640556 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-04 18:03:30.641455 | orchestrator | Friday 04 July 2025 18:03:30 +0000 (0:00:00.201) 0:00:28.552 *********** 2025-07-04 18:03:31.062331 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_654ae738-db23-4503-810d-da49c3934f2e) 2025-07-04 18:03:31.062727 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_654ae738-db23-4503-810d-da49c3934f2e) 2025-07-04 18:03:31.063786 | orchestrator | 2025-07-04 18:03:31.065391 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-04 18:03:31.065730 | orchestrator | Friday 04 July 2025 18:03:31 +0000 (0:00:00.421) 0:00:28.973 *********** 2025-07-04 18:03:31.485090 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_22af1316-5bc1-4af9-ac7a-65db3b57cabb) 2025-07-04 18:03:31.486969 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_22af1316-5bc1-4af9-ac7a-65db3b57cabb) 2025-07-04 18:03:31.487091 | orchestrator | 2025-07-04 18:03:31.488110 
| orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-04 18:03:31.488909 | orchestrator | Friday 04 July 2025 18:03:31 +0000 (0:00:00.424) 0:00:29.398 *********** 2025-07-04 18:03:31.924551 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_f2e9dc75-50de-4afc-bb89-e69d1400c858) 2025-07-04 18:03:31.924782 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_f2e9dc75-50de-4afc-bb89-e69d1400c858) 2025-07-04 18:03:31.925266 | orchestrator | 2025-07-04 18:03:31.925870 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-04 18:03:31.927535 | orchestrator | Friday 04 July 2025 18:03:31 +0000 (0:00:00.437) 0:00:29.836 *********** 2025-07-04 18:03:32.378406 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_9dcda133-58d2-4853-8afe-c4a876875c80) 2025-07-04 18:03:32.378517 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_9dcda133-58d2-4853-8afe-c4a876875c80) 2025-07-04 18:03:32.380014 | orchestrator | 2025-07-04 18:03:32.383015 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-04 18:03:32.384008 | orchestrator | Friday 04 July 2025 18:03:32 +0000 (0:00:00.454) 0:00:30.291 *********** 2025-07-04 18:03:32.726515 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-07-04 18:03:32.727230 | orchestrator | 2025-07-04 18:03:32.728185 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-04 18:03:32.729553 | orchestrator | Friday 04 July 2025 18:03:32 +0000 (0:00:00.348) 0:00:30.639 *********** 2025-07-04 18:03:33.360249 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-07-04 18:03:33.361189 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-07-04 
18:03:33.362425 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-07-04 18:03:33.364193 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-07-04 18:03:33.364987 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-07-04 18:03:33.366087 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-07-04 18:03:33.366982 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-07-04 18:03:33.367686 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-07-04 18:03:33.368381 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-07-04 18:03:33.369192 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-07-04 18:03:33.370131 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-07-04 18:03:33.370807 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-07-04 18:03:33.371248 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-07-04 18:03:33.371850 | orchestrator | 2025-07-04 18:03:33.372403 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-04 18:03:33.373181 | orchestrator | Friday 04 July 2025 18:03:33 +0000 (0:00:00.632) 0:00:31.272 *********** 2025-07-04 18:03:33.589335 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:03:33.590407 | orchestrator | 2025-07-04 18:03:33.591436 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-04 18:03:33.592066 | orchestrator | Friday 04 
July 2025 18:03:33 +0000 (0:00:00.231) 0:00:31.503 *********** 2025-07-04 18:03:33.843683 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:03:33.844686 | orchestrator | 2025-07-04 18:03:33.845666 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-04 18:03:33.846592 | orchestrator | Friday 04 July 2025 18:03:33 +0000 (0:00:00.254) 0:00:31.757 *********** 2025-07-04 18:03:34.044452 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:03:34.045469 | orchestrator | 2025-07-04 18:03:34.046556 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-04 18:03:34.047252 | orchestrator | Friday 04 July 2025 18:03:34 +0000 (0:00:00.198) 0:00:31.955 *********** 2025-07-04 18:03:34.246938 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:03:34.247763 | orchestrator | 2025-07-04 18:03:34.249114 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-04 18:03:34.249816 | orchestrator | Friday 04 July 2025 18:03:34 +0000 (0:00:00.204) 0:00:32.160 *********** 2025-07-04 18:03:34.483535 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:03:34.484241 | orchestrator | 2025-07-04 18:03:34.485585 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-04 18:03:34.486614 | orchestrator | Friday 04 July 2025 18:03:34 +0000 (0:00:00.236) 0:00:32.396 *********** 2025-07-04 18:03:34.703407 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:03:34.703923 | orchestrator | 2025-07-04 18:03:34.705321 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-04 18:03:34.707273 | orchestrator | Friday 04 July 2025 18:03:34 +0000 (0:00:00.218) 0:00:32.615 *********** 2025-07-04 18:03:34.923526 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:03:34.924029 | orchestrator | 2025-07-04 18:03:34.925695 | 
orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-04 18:03:34.927236 | orchestrator | Friday 04 July 2025 18:03:34 +0000 (0:00:00.220) 0:00:32.835 *********** 2025-07-04 18:03:35.139049 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:03:35.140016 | orchestrator | 2025-07-04 18:03:35.141342 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-04 18:03:35.141375 | orchestrator | Friday 04 July 2025 18:03:35 +0000 (0:00:00.214) 0:00:33.049 *********** 2025-07-04 18:03:36.037675 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-07-04 18:03:36.041844 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-07-04 18:03:36.041881 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-07-04 18:03:36.041893 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-07-04 18:03:36.041947 | orchestrator | 2025-07-04 18:03:36.044372 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-04 18:03:36.044427 | orchestrator | Friday 04 July 2025 18:03:36 +0000 (0:00:00.900) 0:00:33.950 *********** 2025-07-04 18:03:36.256215 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:03:36.257781 | orchestrator | 2025-07-04 18:03:36.257833 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-04 18:03:36.257893 | orchestrator | Friday 04 July 2025 18:03:36 +0000 (0:00:00.217) 0:00:34.168 *********** 2025-07-04 18:03:36.475302 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:03:36.476359 | orchestrator | 2025-07-04 18:03:36.476661 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-04 18:03:36.477620 | orchestrator | Friday 04 July 2025 18:03:36 +0000 (0:00:00.220) 0:00:34.388 *********** 2025-07-04 18:03:37.146297 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:03:37.147212 | 
orchestrator | 2025-07-04 18:03:37.148792 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-04 18:03:37.149527 | orchestrator | Friday 04 July 2025 18:03:37 +0000 (0:00:00.669) 0:00:35.057 *********** 2025-07-04 18:03:37.379715 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:03:37.380581 | orchestrator | 2025-07-04 18:03:37.381510 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-07-04 18:03:37.382077 | orchestrator | Friday 04 July 2025 18:03:37 +0000 (0:00:00.235) 0:00:35.293 *********** 2025-07-04 18:03:37.526306 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:03:37.526856 | orchestrator | 2025-07-04 18:03:37.527738 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-07-04 18:03:37.528366 | orchestrator | Friday 04 July 2025 18:03:37 +0000 (0:00:00.146) 0:00:35.439 *********** 2025-07-04 18:03:37.729522 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '0c11b362-ac03-5009-be6f-11a9ef5f18dc'}}) 2025-07-04 18:03:37.730558 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b396848d-3790-5c5a-8f8a-1e47b4270a43'}}) 2025-07-04 18:03:37.731666 | orchestrator | 2025-07-04 18:03:37.733693 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-07-04 18:03:37.733807 | orchestrator | Friday 04 July 2025 18:03:37 +0000 (0:00:00.203) 0:00:35.643 *********** 2025-07-04 18:03:39.570654 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-0c11b362-ac03-5009-be6f-11a9ef5f18dc', 'data_vg': 'ceph-0c11b362-ac03-5009-be6f-11a9ef5f18dc'}) 2025-07-04 18:03:39.575454 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-b396848d-3790-5c5a-8f8a-1e47b4270a43', 'data_vg': 'ceph-b396848d-3790-5c5a-8f8a-1e47b4270a43'}) 2025-07-04 18:03:39.576012 | 
orchestrator | 2025-07-04 18:03:39.577253 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-07-04 18:03:39.577662 | orchestrator | Friday 04 July 2025 18:03:39 +0000 (0:00:01.839) 0:00:37.482 *********** 2025-07-04 18:03:39.730655 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0c11b362-ac03-5009-be6f-11a9ef5f18dc', 'data_vg': 'ceph-0c11b362-ac03-5009-be6f-11a9ef5f18dc'})  2025-07-04 18:03:39.730762 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b396848d-3790-5c5a-8f8a-1e47b4270a43', 'data_vg': 'ceph-b396848d-3790-5c5a-8f8a-1e47b4270a43'})  2025-07-04 18:03:39.731436 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:03:39.731994 | orchestrator | 2025-07-04 18:03:39.732703 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-07-04 18:03:39.733109 | orchestrator | Friday 04 July 2025 18:03:39 +0000 (0:00:00.161) 0:00:37.643 *********** 2025-07-04 18:03:41.017355 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-0c11b362-ac03-5009-be6f-11a9ef5f18dc', 'data_vg': 'ceph-0c11b362-ac03-5009-be6f-11a9ef5f18dc'}) 2025-07-04 18:03:41.018473 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-b396848d-3790-5c5a-8f8a-1e47b4270a43', 'data_vg': 'ceph-b396848d-3790-5c5a-8f8a-1e47b4270a43'}) 2025-07-04 18:03:41.020318 | orchestrator | 2025-07-04 18:03:41.021274 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-07-04 18:03:41.021900 | orchestrator | Friday 04 July 2025 18:03:41 +0000 (0:00:01.284) 0:00:38.928 *********** 2025-07-04 18:03:41.191815 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0c11b362-ac03-5009-be6f-11a9ef5f18dc', 'data_vg': 'ceph-0c11b362-ac03-5009-be6f-11a9ef5f18dc'})  2025-07-04 18:03:41.192755 | orchestrator | skipping: [testbed-node-4] => (item={'data': 
'osd-block-b396848d-3790-5c5a-8f8a-1e47b4270a43', 'data_vg': 'ceph-b396848d-3790-5c5a-8f8a-1e47b4270a43'})  2025-07-04 18:03:41.193211 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:03:41.194581 | orchestrator | 2025-07-04 18:03:41.196184 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-07-04 18:03:41.196211 | orchestrator | Friday 04 July 2025 18:03:41 +0000 (0:00:00.176) 0:00:39.105 *********** 2025-07-04 18:03:41.334329 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:03:41.335529 | orchestrator | 2025-07-04 18:03:41.336697 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-07-04 18:03:41.337741 | orchestrator | Friday 04 July 2025 18:03:41 +0000 (0:00:00.142) 0:00:39.247 *********** 2025-07-04 18:03:41.500542 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0c11b362-ac03-5009-be6f-11a9ef5f18dc', 'data_vg': 'ceph-0c11b362-ac03-5009-be6f-11a9ef5f18dc'})  2025-07-04 18:03:41.500748 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b396848d-3790-5c5a-8f8a-1e47b4270a43', 'data_vg': 'ceph-b396848d-3790-5c5a-8f8a-1e47b4270a43'})  2025-07-04 18:03:41.502702 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:03:41.503345 | orchestrator | 2025-07-04 18:03:41.504989 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-07-04 18:03:41.505889 | orchestrator | Friday 04 July 2025 18:03:41 +0000 (0:00:00.165) 0:00:39.413 *********** 2025-07-04 18:03:41.641245 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:03:41.642216 | orchestrator | 2025-07-04 18:03:41.643488 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-07-04 18:03:41.644414 | orchestrator | Friday 04 July 2025 18:03:41 +0000 (0:00:00.141) 0:00:39.554 *********** 2025-07-04 18:03:41.805411 | orchestrator | skipping: 
[testbed-node-4] => (item={'data': 'osd-block-0c11b362-ac03-5009-be6f-11a9ef5f18dc', 'data_vg': 'ceph-0c11b362-ac03-5009-be6f-11a9ef5f18dc'})  2025-07-04 18:03:41.806124 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b396848d-3790-5c5a-8f8a-1e47b4270a43', 'data_vg': 'ceph-b396848d-3790-5c5a-8f8a-1e47b4270a43'})  2025-07-04 18:03:41.806287 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:03:41.807425 | orchestrator | 2025-07-04 18:03:41.808199 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-07-04 18:03:41.809014 | orchestrator | Friday 04 July 2025 18:03:41 +0000 (0:00:00.160) 0:00:39.715 *********** 2025-07-04 18:03:42.151011 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:03:42.152091 | orchestrator | 2025-07-04 18:03:42.152422 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-07-04 18:03:42.153724 | orchestrator | Friday 04 July 2025 18:03:42 +0000 (0:00:00.348) 0:00:40.064 *********** 2025-07-04 18:03:42.320629 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0c11b362-ac03-5009-be6f-11a9ef5f18dc', 'data_vg': 'ceph-0c11b362-ac03-5009-be6f-11a9ef5f18dc'})  2025-07-04 18:03:42.321395 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b396848d-3790-5c5a-8f8a-1e47b4270a43', 'data_vg': 'ceph-b396848d-3790-5c5a-8f8a-1e47b4270a43'})  2025-07-04 18:03:42.322222 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:03:42.323209 | orchestrator | 2025-07-04 18:03:42.324266 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-07-04 18:03:42.325135 | orchestrator | Friday 04 July 2025 18:03:42 +0000 (0:00:00.170) 0:00:40.234 *********** 2025-07-04 18:03:42.469667 | orchestrator | ok: [testbed-node-4] 2025-07-04 18:03:42.471673 | orchestrator | 2025-07-04 18:03:42.471942 | orchestrator | TASK [Count OSDs put on ceph_db_devices 
defined in lvm_volumes] **************** 2025-07-04 18:03:42.473349 | orchestrator | Friday 04 July 2025 18:03:42 +0000 (0:00:00.148) 0:00:40.382 *********** 2025-07-04 18:03:42.629211 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0c11b362-ac03-5009-be6f-11a9ef5f18dc', 'data_vg': 'ceph-0c11b362-ac03-5009-be6f-11a9ef5f18dc'})  2025-07-04 18:03:42.629789 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b396848d-3790-5c5a-8f8a-1e47b4270a43', 'data_vg': 'ceph-b396848d-3790-5c5a-8f8a-1e47b4270a43'})  2025-07-04 18:03:42.631009 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:03:42.632098 | orchestrator | 2025-07-04 18:03:42.632856 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-07-04 18:03:42.633909 | orchestrator | Friday 04 July 2025 18:03:42 +0000 (0:00:00.158) 0:00:40.541 *********** 2025-07-04 18:03:42.799715 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0c11b362-ac03-5009-be6f-11a9ef5f18dc', 'data_vg': 'ceph-0c11b362-ac03-5009-be6f-11a9ef5f18dc'})  2025-07-04 18:03:42.799821 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b396848d-3790-5c5a-8f8a-1e47b4270a43', 'data_vg': 'ceph-b396848d-3790-5c5a-8f8a-1e47b4270a43'})  2025-07-04 18:03:42.800279 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:03:42.801498 | orchestrator | 2025-07-04 18:03:42.802962 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-07-04 18:03:42.803514 | orchestrator | Friday 04 July 2025 18:03:42 +0000 (0:00:00.166) 0:00:40.708 *********** 2025-07-04 18:03:42.947562 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0c11b362-ac03-5009-be6f-11a9ef5f18dc', 'data_vg': 'ceph-0c11b362-ac03-5009-be6f-11a9ef5f18dc'})  2025-07-04 18:03:42.948603 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b396848d-3790-5c5a-8f8a-1e47b4270a43', 
'data_vg': 'ceph-b396848d-3790-5c5a-8f8a-1e47b4270a43'})  2025-07-04 18:03:42.950242 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:03:42.952075 | orchestrator | 2025-07-04 18:03:42.953454 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-07-04 18:03:42.954579 | orchestrator | Friday 04 July 2025 18:03:42 +0000 (0:00:00.152) 0:00:40.860 *********** 2025-07-04 18:03:43.092384 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:03:43.094010 | orchestrator | 2025-07-04 18:03:43.095478 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-07-04 18:03:43.096319 | orchestrator | Friday 04 July 2025 18:03:43 +0000 (0:00:00.143) 0:00:41.004 *********** 2025-07-04 18:03:43.222272 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:03:43.223428 | orchestrator | 2025-07-04 18:03:43.224855 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-07-04 18:03:43.226371 | orchestrator | Friday 04 July 2025 18:03:43 +0000 (0:00:00.131) 0:00:41.135 *********** 2025-07-04 18:03:43.385495 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:03:43.385596 | orchestrator | 2025-07-04 18:03:43.386740 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-07-04 18:03:43.386777 | orchestrator | Friday 04 July 2025 18:03:43 +0000 (0:00:00.162) 0:00:41.298 *********** 2025-07-04 18:03:43.538938 | orchestrator | ok: [testbed-node-4] => { 2025-07-04 18:03:43.539527 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-07-04 18:03:43.540833 | orchestrator | } 2025-07-04 18:03:43.541692 | orchestrator | 2025-07-04 18:03:43.543130 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-07-04 18:03:43.543971 | orchestrator | Friday 04 July 2025 18:03:43 +0000 (0:00:00.153) 0:00:41.452 *********** 2025-07-04 18:03:43.686778 | 
orchestrator | ok: [testbed-node-4] => { 2025-07-04 18:03:43.687508 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-07-04 18:03:43.689476 | orchestrator | } 2025-07-04 18:03:43.690638 | orchestrator | 2025-07-04 18:03:43.691351 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-07-04 18:03:43.692475 | orchestrator | Friday 04 July 2025 18:03:43 +0000 (0:00:00.148) 0:00:41.600 *********** 2025-07-04 18:03:43.837868 | orchestrator | ok: [testbed-node-4] => { 2025-07-04 18:03:43.839773 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-07-04 18:03:43.840018 | orchestrator | } 2025-07-04 18:03:43.840865 | orchestrator | 2025-07-04 18:03:43.841752 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-07-04 18:03:43.842249 | orchestrator | Friday 04 July 2025 18:03:43 +0000 (0:00:00.148) 0:00:41.749 *********** 2025-07-04 18:03:44.556774 | orchestrator | ok: [testbed-node-4] 2025-07-04 18:03:44.557694 | orchestrator | 2025-07-04 18:03:44.559738 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-07-04 18:03:44.560836 | orchestrator | Friday 04 July 2025 18:03:44 +0000 (0:00:00.721) 0:00:42.470 *********** 2025-07-04 18:03:45.106617 | orchestrator | ok: [testbed-node-4] 2025-07-04 18:03:45.106751 | orchestrator | 2025-07-04 18:03:45.107363 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-07-04 18:03:45.108210 | orchestrator | Friday 04 July 2025 18:03:45 +0000 (0:00:00.548) 0:00:43.019 *********** 2025-07-04 18:03:45.639947 | orchestrator | ok: [testbed-node-4] 2025-07-04 18:03:45.640363 | orchestrator | 2025-07-04 18:03:45.640921 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-07-04 18:03:45.641332 | orchestrator | Friday 04 July 2025 18:03:45 +0000 (0:00:00.535) 0:00:43.554 *********** 2025-07-04 
18:03:45.803942 | orchestrator | ok: [testbed-node-4] 2025-07-04 18:03:45.805840 | orchestrator | 2025-07-04 18:03:45.807119 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-07-04 18:03:45.808376 | orchestrator | Friday 04 July 2025 18:03:45 +0000 (0:00:00.161) 0:00:43.716 *********** 2025-07-04 18:03:45.914398 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:03:45.915599 | orchestrator | 2025-07-04 18:03:45.917100 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-07-04 18:03:45.917978 | orchestrator | Friday 04 July 2025 18:03:45 +0000 (0:00:00.111) 0:00:43.827 *********** 2025-07-04 18:03:46.025551 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:03:46.025742 | orchestrator | 2025-07-04 18:03:46.028447 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-07-04 18:03:46.029845 | orchestrator | Friday 04 July 2025 18:03:46 +0000 (0:00:00.111) 0:00:43.938 *********** 2025-07-04 18:03:46.172209 | orchestrator | ok: [testbed-node-4] => { 2025-07-04 18:03:46.172435 | orchestrator |  "vgs_report": { 2025-07-04 18:03:46.173856 | orchestrator |  "vg": [] 2025-07-04 18:03:46.174994 | orchestrator |  } 2025-07-04 18:03:46.176250 | orchestrator | } 2025-07-04 18:03:46.176771 | orchestrator | 2025-07-04 18:03:46.177390 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-07-04 18:03:46.178311 | orchestrator | Friday 04 July 2025 18:03:46 +0000 (0:00:00.144) 0:00:44.083 *********** 2025-07-04 18:03:46.316960 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:03:46.317063 | orchestrator | 2025-07-04 18:03:46.317077 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-07-04 18:03:46.317380 | orchestrator | Friday 04 July 2025 18:03:46 +0000 (0:00:00.146) 0:00:44.230 *********** 2025-07-04 
18:03:46.468272 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:03:46.470068 | orchestrator | 2025-07-04 18:03:46.471224 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-07-04 18:03:46.473229 | orchestrator | Friday 04 July 2025 18:03:46 +0000 (0:00:00.150) 0:00:44.381 *********** 2025-07-04 18:03:46.595340 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:03:46.596522 | orchestrator | 2025-07-04 18:03:46.597112 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-07-04 18:03:46.598253 | orchestrator | Friday 04 July 2025 18:03:46 +0000 (0:00:00.127) 0:00:44.508 *********** 2025-07-04 18:03:46.750294 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:03:46.750888 | orchestrator | 2025-07-04 18:03:46.752955 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-07-04 18:03:46.753108 | orchestrator | Friday 04 July 2025 18:03:46 +0000 (0:00:00.155) 0:00:44.663 *********** 2025-07-04 18:03:46.885719 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:03:46.885902 | orchestrator | 2025-07-04 18:03:46.887303 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-07-04 18:03:46.888188 | orchestrator | Friday 04 July 2025 18:03:46 +0000 (0:00:00.132) 0:00:44.796 *********** 2025-07-04 18:03:47.232003 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:03:47.232121 | orchestrator | 2025-07-04 18:03:47.232970 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-07-04 18:03:47.233625 | orchestrator | Friday 04 July 2025 18:03:47 +0000 (0:00:00.350) 0:00:45.146 *********** 2025-07-04 18:03:47.381536 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:03:47.386231 | orchestrator | 2025-07-04 18:03:47.387704 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] 
**************** 2025-07-04 18:03:47.390007 | orchestrator | Friday 04 July 2025 18:03:47 +0000 (0:00:00.146) 0:00:45.292 *********** 2025-07-04 18:03:47.523740 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:03:47.524726 | orchestrator | 2025-07-04 18:03:47.526287 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-07-04 18:03:47.527259 | orchestrator | Friday 04 July 2025 18:03:47 +0000 (0:00:00.144) 0:00:45.437 *********** 2025-07-04 18:03:47.661411 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:03:47.662481 | orchestrator | 2025-07-04 18:03:47.663477 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-07-04 18:03:47.664276 | orchestrator | Friday 04 July 2025 18:03:47 +0000 (0:00:00.137) 0:00:45.574 *********** 2025-07-04 18:03:47.803171 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:03:47.804025 | orchestrator | 2025-07-04 18:03:47.805952 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-07-04 18:03:47.806774 | orchestrator | Friday 04 July 2025 18:03:47 +0000 (0:00:00.141) 0:00:45.715 *********** 2025-07-04 18:03:47.935922 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:03:47.937014 | orchestrator | 2025-07-04 18:03:47.937809 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-07-04 18:03:47.939286 | orchestrator | Friday 04 July 2025 18:03:47 +0000 (0:00:00.133) 0:00:45.849 *********** 2025-07-04 18:03:48.080439 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:03:48.081074 | orchestrator | 2025-07-04 18:03:48.082671 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-07-04 18:03:48.084281 | orchestrator | Friday 04 July 2025 18:03:48 +0000 (0:00:00.142) 0:00:45.992 *********** 2025-07-04 18:03:48.232480 | orchestrator | skipping: [testbed-node-4] 
2025-07-04 18:03:48.232664 | orchestrator | 2025-07-04 18:03:48.232685 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-07-04 18:03:48.233067 | orchestrator | Friday 04 July 2025 18:03:48 +0000 (0:00:00.152) 0:00:46.144 *********** 2025-07-04 18:03:48.378883 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:03:48.378989 | orchestrator | 2025-07-04 18:03:48.380919 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-07-04 18:03:48.382464 | orchestrator | Friday 04 July 2025 18:03:48 +0000 (0:00:00.147) 0:00:46.291 *********** 2025-07-04 18:03:48.534436 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0c11b362-ac03-5009-be6f-11a9ef5f18dc', 'data_vg': 'ceph-0c11b362-ac03-5009-be6f-11a9ef5f18dc'})  2025-07-04 18:03:48.534626 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b396848d-3790-5c5a-8f8a-1e47b4270a43', 'data_vg': 'ceph-b396848d-3790-5c5a-8f8a-1e47b4270a43'})  2025-07-04 18:03:48.536593 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:03:48.537455 | orchestrator | 2025-07-04 18:03:48.538581 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-07-04 18:03:48.539665 | orchestrator | Friday 04 July 2025 18:03:48 +0000 (0:00:00.154) 0:00:46.446 *********** 2025-07-04 18:03:48.690495 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0c11b362-ac03-5009-be6f-11a9ef5f18dc', 'data_vg': 'ceph-0c11b362-ac03-5009-be6f-11a9ef5f18dc'})  2025-07-04 18:03:48.690925 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b396848d-3790-5c5a-8f8a-1e47b4270a43', 'data_vg': 'ceph-b396848d-3790-5c5a-8f8a-1e47b4270a43'})  2025-07-04 18:03:48.693258 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:03:48.693959 | orchestrator | 2025-07-04 18:03:48.695643 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] 
************************************* 2025-07-04 18:03:48.697192 | orchestrator | Friday 04 July 2025 18:03:48 +0000 (0:00:00.156) 0:00:46.602 *********** 2025-07-04 18:03:48.845748 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0c11b362-ac03-5009-be6f-11a9ef5f18dc', 'data_vg': 'ceph-0c11b362-ac03-5009-be6f-11a9ef5f18dc'})  2025-07-04 18:03:48.845938 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b396848d-3790-5c5a-8f8a-1e47b4270a43', 'data_vg': 'ceph-b396848d-3790-5c5a-8f8a-1e47b4270a43'})  2025-07-04 18:03:48.847172 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:03:48.848273 | orchestrator | 2025-07-04 18:03:48.849063 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-07-04 18:03:48.849773 | orchestrator | Friday 04 July 2025 18:03:48 +0000 (0:00:00.154) 0:00:46.757 *********** 2025-07-04 18:03:49.214274 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0c11b362-ac03-5009-be6f-11a9ef5f18dc', 'data_vg': 'ceph-0c11b362-ac03-5009-be6f-11a9ef5f18dc'})  2025-07-04 18:03:49.214378 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b396848d-3790-5c5a-8f8a-1e47b4270a43', 'data_vg': 'ceph-b396848d-3790-5c5a-8f8a-1e47b4270a43'})  2025-07-04 18:03:49.214655 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:03:49.215206 | orchestrator | 2025-07-04 18:03:49.216063 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-07-04 18:03:49.216439 | orchestrator | Friday 04 July 2025 18:03:49 +0000 (0:00:00.365) 0:00:47.122 *********** 2025-07-04 18:03:49.360590 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0c11b362-ac03-5009-be6f-11a9ef5f18dc', 'data_vg': 'ceph-0c11b362-ac03-5009-be6f-11a9ef5f18dc'})  2025-07-04 18:03:49.360749 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b396848d-3790-5c5a-8f8a-1e47b4270a43', 
'data_vg': 'ceph-b396848d-3790-5c5a-8f8a-1e47b4270a43'})  2025-07-04 18:03:49.361871 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:03:49.362915 | orchestrator | 2025-07-04 18:03:49.364037 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-07-04 18:03:49.364733 | orchestrator | Friday 04 July 2025 18:03:49 +0000 (0:00:00.150) 0:00:47.273 *********** 2025-07-04 18:03:49.545820 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0c11b362-ac03-5009-be6f-11a9ef5f18dc', 'data_vg': 'ceph-0c11b362-ac03-5009-be6f-11a9ef5f18dc'})  2025-07-04 18:03:49.546200 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b396848d-3790-5c5a-8f8a-1e47b4270a43', 'data_vg': 'ceph-b396848d-3790-5c5a-8f8a-1e47b4270a43'})  2025-07-04 18:03:49.547448 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:03:49.548937 | orchestrator | 2025-07-04 18:03:49.552678 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-07-04 18:03:49.554174 | orchestrator | Friday 04 July 2025 18:03:49 +0000 (0:00:00.183) 0:00:47.457 *********** 2025-07-04 18:03:49.718665 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0c11b362-ac03-5009-be6f-11a9ef5f18dc', 'data_vg': 'ceph-0c11b362-ac03-5009-be6f-11a9ef5f18dc'})  2025-07-04 18:03:49.718931 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b396848d-3790-5c5a-8f8a-1e47b4270a43', 'data_vg': 'ceph-b396848d-3790-5c5a-8f8a-1e47b4270a43'})  2025-07-04 18:03:49.720666 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:03:49.722378 | orchestrator | 2025-07-04 18:03:49.723088 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-07-04 18:03:49.724708 | orchestrator | Friday 04 July 2025 18:03:49 +0000 (0:00:00.173) 0:00:47.631 *********** 2025-07-04 18:03:49.879694 | orchestrator | skipping: [testbed-node-4] => 
(item={'data': 'osd-block-0c11b362-ac03-5009-be6f-11a9ef5f18dc', 'data_vg': 'ceph-0c11b362-ac03-5009-be6f-11a9ef5f18dc'})  2025-07-04 18:03:49.880205 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b396848d-3790-5c5a-8f8a-1e47b4270a43', 'data_vg': 'ceph-b396848d-3790-5c5a-8f8a-1e47b4270a43'})  2025-07-04 18:03:49.881076 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:03:49.882695 | orchestrator | 2025-07-04 18:03:49.882736 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-07-04 18:03:49.883770 | orchestrator | Friday 04 July 2025 18:03:49 +0000 (0:00:00.159) 0:00:47.790 *********** 2025-07-04 18:03:50.372480 | orchestrator | ok: [testbed-node-4] 2025-07-04 18:03:50.372687 | orchestrator | 2025-07-04 18:03:50.373602 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-07-04 18:03:50.374240 | orchestrator | Friday 04 July 2025 18:03:50 +0000 (0:00:00.494) 0:00:48.285 *********** 2025-07-04 18:03:50.883239 | orchestrator | ok: [testbed-node-4] 2025-07-04 18:03:50.883421 | orchestrator | 2025-07-04 18:03:50.884418 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-07-04 18:03:50.885373 | orchestrator | Friday 04 July 2025 18:03:50 +0000 (0:00:00.509) 0:00:48.794 *********** 2025-07-04 18:03:51.038690 | orchestrator | ok: [testbed-node-4] 2025-07-04 18:03:51.039721 | orchestrator | 2025-07-04 18:03:51.040717 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-07-04 18:03:51.041595 | orchestrator | Friday 04 July 2025 18:03:51 +0000 (0:00:00.157) 0:00:48.952 *********** 2025-07-04 18:03:51.235413 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-0c11b362-ac03-5009-be6f-11a9ef5f18dc', 'vg_name': 'ceph-0c11b362-ac03-5009-be6f-11a9ef5f18dc'}) 2025-07-04 18:03:51.237091 | orchestrator | ok: [testbed-node-4] => 
(item={'lv_name': 'osd-block-b396848d-3790-5c5a-8f8a-1e47b4270a43', 'vg_name': 'ceph-b396848d-3790-5c5a-8f8a-1e47b4270a43'}) 2025-07-04 18:03:51.237128 | orchestrator | 2025-07-04 18:03:51.238251 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-07-04 18:03:51.238967 | orchestrator | Friday 04 July 2025 18:03:51 +0000 (0:00:00.194) 0:00:49.146 *********** 2025-07-04 18:03:51.390513 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0c11b362-ac03-5009-be6f-11a9ef5f18dc', 'data_vg': 'ceph-0c11b362-ac03-5009-be6f-11a9ef5f18dc'})  2025-07-04 18:03:51.392051 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b396848d-3790-5c5a-8f8a-1e47b4270a43', 'data_vg': 'ceph-b396848d-3790-5c5a-8f8a-1e47b4270a43'})  2025-07-04 18:03:51.395068 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:03:51.396083 | orchestrator | 2025-07-04 18:03:51.396370 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-07-04 18:03:51.397014 | orchestrator | Friday 04 July 2025 18:03:51 +0000 (0:00:00.155) 0:00:49.302 *********** 2025-07-04 18:03:51.536258 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0c11b362-ac03-5009-be6f-11a9ef5f18dc', 'data_vg': 'ceph-0c11b362-ac03-5009-be6f-11a9ef5f18dc'})  2025-07-04 18:03:51.536784 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b396848d-3790-5c5a-8f8a-1e47b4270a43', 'data_vg': 'ceph-b396848d-3790-5c5a-8f8a-1e47b4270a43'})  2025-07-04 18:03:51.539496 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:03:51.539534 | orchestrator | 2025-07-04 18:03:51.540656 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-07-04 18:03:51.541408 | orchestrator | Friday 04 July 2025 18:03:51 +0000 (0:00:00.146) 0:00:49.449 *********** 2025-07-04 18:03:51.719275 | orchestrator | skipping: [testbed-node-4] => 
(item={'data': 'osd-block-0c11b362-ac03-5009-be6f-11a9ef5f18dc', 'data_vg': 'ceph-0c11b362-ac03-5009-be6f-11a9ef5f18dc'})  2025-07-04 18:03:51.720254 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b396848d-3790-5c5a-8f8a-1e47b4270a43', 'data_vg': 'ceph-b396848d-3790-5c5a-8f8a-1e47b4270a43'})  2025-07-04 18:03:51.721735 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:03:51.724120 | orchestrator | 2025-07-04 18:03:51.724184 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-07-04 18:03:51.724198 | orchestrator | Friday 04 July 2025 18:03:51 +0000 (0:00:00.183) 0:00:49.632 *********** 2025-07-04 18:03:52.205871 | orchestrator | ok: [testbed-node-4] => { 2025-07-04 18:03:52.206837 | orchestrator |  "lvm_report": { 2025-07-04 18:03:52.208002 | orchestrator |  "lv": [ 2025-07-04 18:03:52.209256 | orchestrator |  { 2025-07-04 18:03:52.209962 | orchestrator |  "lv_name": "osd-block-0c11b362-ac03-5009-be6f-11a9ef5f18dc", 2025-07-04 18:03:52.211085 | orchestrator |  "vg_name": "ceph-0c11b362-ac03-5009-be6f-11a9ef5f18dc" 2025-07-04 18:03:52.212020 | orchestrator |  }, 2025-07-04 18:03:52.212991 | orchestrator |  { 2025-07-04 18:03:52.213672 | orchestrator |  "lv_name": "osd-block-b396848d-3790-5c5a-8f8a-1e47b4270a43", 2025-07-04 18:03:52.214164 | orchestrator |  "vg_name": "ceph-b396848d-3790-5c5a-8f8a-1e47b4270a43" 2025-07-04 18:03:52.214801 | orchestrator |  } 2025-07-04 18:03:52.215196 | orchestrator |  ], 2025-07-04 18:03:52.216506 | orchestrator |  "pv": [ 2025-07-04 18:03:52.216664 | orchestrator |  { 2025-07-04 18:03:52.217600 | orchestrator |  "pv_name": "/dev/sdb", 2025-07-04 18:03:52.217949 | orchestrator |  "vg_name": "ceph-0c11b362-ac03-5009-be6f-11a9ef5f18dc" 2025-07-04 18:03:52.218595 | orchestrator |  }, 2025-07-04 18:03:52.219156 | orchestrator |  { 2025-07-04 18:03:52.219739 | orchestrator |  "pv_name": "/dev/sdc", 2025-07-04 18:03:52.220174 | orchestrator |  "vg_name": 
"ceph-b396848d-3790-5c5a-8f8a-1e47b4270a43" 2025-07-04 18:03:52.220283 | orchestrator |  } 2025-07-04 18:03:52.220863 | orchestrator |  ] 2025-07-04 18:03:52.221166 | orchestrator |  } 2025-07-04 18:03:52.221502 | orchestrator | } 2025-07-04 18:03:52.221734 | orchestrator | 2025-07-04 18:03:52.222344 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-07-04 18:03:52.222771 | orchestrator | 2025-07-04 18:03:52.223089 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-07-04 18:03:52.223610 | orchestrator | Friday 04 July 2025 18:03:52 +0000 (0:00:00.484) 0:00:50.117 *********** 2025-07-04 18:03:52.452435 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-07-04 18:03:52.452598 | orchestrator | 2025-07-04 18:03:52.453921 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-07-04 18:03:52.454215 | orchestrator | Friday 04 July 2025 18:03:52 +0000 (0:00:00.247) 0:00:50.365 *********** 2025-07-04 18:03:52.677945 | orchestrator | ok: [testbed-node-5] 2025-07-04 18:03:52.678822 | orchestrator | 2025-07-04 18:03:52.679719 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-04 18:03:52.680445 | orchestrator | Friday 04 July 2025 18:03:52 +0000 (0:00:00.225) 0:00:50.591 *********** 2025-07-04 18:03:53.092371 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-07-04 18:03:53.093518 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-07-04 18:03:53.094233 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-07-04 18:03:53.095347 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-07-04 18:03:53.096072 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-07-04 18:03:53.096966 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-07-04 18:03:53.097627 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-07-04 18:03:53.098232 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-07-04 18:03:53.098911 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-07-04 18:03:53.099591 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-07-04 18:03:53.100274 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-07-04 18:03:53.100795 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-07-04 18:03:53.101237 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-07-04 18:03:53.101766 | orchestrator | 2025-07-04 18:03:53.102115 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-04 18:03:53.102630 | orchestrator | Friday 04 July 2025 18:03:53 +0000 (0:00:00.410) 0:00:51.001 *********** 2025-07-04 18:03:53.329950 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:03:53.330866 | orchestrator | 2025-07-04 18:03:53.331878 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-04 18:03:53.332551 | orchestrator | Friday 04 July 2025 18:03:53 +0000 (0:00:00.241) 0:00:51.242 *********** 2025-07-04 18:03:53.533456 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:03:53.534118 | orchestrator | 2025-07-04 18:03:53.535387 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-04 18:03:53.536244 | orchestrator | 
Friday 04 July 2025 18:03:53 +0000 (0:00:00.204) 0:00:51.446 ***********
2025-07-04 18:03:53.762799 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:03:53.763226 | orchestrator |
2025-07-04 18:03:53.763828 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-04 18:03:53.764525 | orchestrator | Friday 04 July 2025 18:03:53 +0000 (0:00:00.229) 0:00:51.676 ***********
2025-07-04 18:03:53.959359 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:03:53.959534 | orchestrator |
2025-07-04 18:03:53.960859 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-04 18:03:53.961483 | orchestrator | Friday 04 July 2025 18:03:53 +0000 (0:00:00.194) 0:00:51.871 ***********
2025-07-04 18:03:54.154461 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:03:54.154679 | orchestrator |
2025-07-04 18:03:54.156306 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-04 18:03:54.157205 | orchestrator | Friday 04 July 2025 18:03:54 +0000 (0:00:00.196) 0:00:52.067 ***********
2025-07-04 18:03:54.793729 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:03:54.794209 | orchestrator |
2025-07-04 18:03:54.795730 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-04 18:03:54.796815 | orchestrator | Friday 04 July 2025 18:03:54 +0000 (0:00:00.639) 0:00:52.706 ***********
2025-07-04 18:03:55.006345 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:03:55.007814 | orchestrator |
2025-07-04 18:03:55.008854 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-04 18:03:55.009766 | orchestrator | Friday 04 July 2025 18:03:54 +0000 (0:00:00.211) 0:00:52.918 ***********
2025-07-04 18:03:55.229259 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:03:55.229394 | orchestrator |
2025-07-04 18:03:55.231398 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-04 18:03:55.232327 | orchestrator | Friday 04 July 2025 18:03:55 +0000 (0:00:00.223) 0:00:53.141 ***********
2025-07-04 18:03:55.673566 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_e5fbf5c6-81a8-4539-96cc-19329771a958)
2025-07-04 18:03:55.674239 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_e5fbf5c6-81a8-4539-96cc-19329771a958)
2025-07-04 18:03:55.675236 | orchestrator |
2025-07-04 18:03:55.676070 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-04 18:03:55.677411 | orchestrator | Friday 04 July 2025 18:03:55 +0000 (0:00:00.445) 0:00:53.586 ***********
2025-07-04 18:03:56.094317 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_cc9ae976-88cb-4b21-9449-d8985ff12d4f)
2025-07-04 18:03:56.095388 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_cc9ae976-88cb-4b21-9449-d8985ff12d4f)
2025-07-04 18:03:56.096672 | orchestrator |
2025-07-04 18:03:56.097561 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-04 18:03:56.098068 | orchestrator | Friday 04 July 2025 18:03:56 +0000 (0:00:00.418) 0:00:54.005 ***********
2025-07-04 18:03:56.510326 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_d957e37b-6f48-487c-9682-d56dbc604f5a)
2025-07-04 18:03:56.510417 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_d957e37b-6f48-487c-9682-d56dbc604f5a)
2025-07-04 18:03:56.511324 | orchestrator |
2025-07-04 18:03:56.512862 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-04 18:03:56.512913 | orchestrator | Friday 04 July 2025 18:03:56 +0000 (0:00:00.417) 0:00:54.423 ***********
2025-07-04 18:03:56.948460 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_36831ba3-00a3-40d1-8c8d-d5688ce5b92e)
2025-07-04 18:03:56.949074 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_36831ba3-00a3-40d1-8c8d-d5688ce5b92e)
2025-07-04 18:03:56.949881 | orchestrator |
2025-07-04 18:03:56.951001 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-04 18:03:56.951548 | orchestrator | Friday 04 July 2025 18:03:56 +0000 (0:00:00.438) 0:00:54.861 ***********
2025-07-04 18:03:57.303561 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-07-04 18:03:57.305035 | orchestrator |
2025-07-04 18:03:57.306626 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-04 18:03:57.307843 | orchestrator | Friday 04 July 2025 18:03:57 +0000 (0:00:00.352) 0:00:55.214 ***********
2025-07-04 18:03:57.712314 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2025-07-04 18:03:57.713160 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2025-07-04 18:03:57.714507 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2025-07-04 18:03:57.715609 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2025-07-04 18:03:57.716512 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2025-07-04 18:03:57.717198 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2025-07-04 18:03:57.718044 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2025-07-04 18:03:57.718670 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2025-07-04 18:03:57.719023 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2025-07-04 18:03:57.719790 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2025-07-04 18:03:57.720344 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2025-07-04 18:03:57.720760 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2025-07-04 18:03:57.721379 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2025-07-04 18:03:57.721855 | orchestrator |
2025-07-04 18:03:57.722242 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-04 18:03:57.722726 | orchestrator | Friday 04 July 2025 18:03:57 +0000 (0:00:00.410) 0:00:55.625 ***********
2025-07-04 18:03:57.906729 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:03:57.906913 | orchestrator |
2025-07-04 18:03:57.907059 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-04 18:03:57.907940 | orchestrator | Friday 04 July 2025 18:03:57 +0000 (0:00:00.192) 0:00:55.818 ***********
2025-07-04 18:03:58.129527 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:03:58.130229 | orchestrator |
2025-07-04 18:03:58.131225 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-04 18:03:58.132324 | orchestrator | Friday 04 July 2025 18:03:58 +0000 (0:00:00.225) 0:00:56.043 ***********
2025-07-04 18:03:58.790616 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:03:58.790878 | orchestrator |
2025-07-04 18:03:58.792087 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-04 18:03:58.792940 | orchestrator | Friday 04 July 2025 18:03:58 +0000 (0:00:00.660) 0:00:56.703 ***********
2025-07-04 18:03:59.000634 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:03:59.000803 | orchestrator |
2025-07-04 18:03:59.001232 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-04 18:03:59.001519 | orchestrator | Friday 04 July 2025 18:03:58 +0000 (0:00:00.210) 0:00:56.914 ***********
2025-07-04 18:03:59.200463 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:03:59.201007 | orchestrator |
2025-07-04 18:03:59.202095 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-04 18:03:59.204238 | orchestrator | Friday 04 July 2025 18:03:59 +0000 (0:00:00.198) 0:00:57.113 ***********
2025-07-04 18:03:59.403656 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:03:59.409729 | orchestrator |
2025-07-04 18:03:59.410305 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-04 18:03:59.410997 | orchestrator | Friday 04 July 2025 18:03:59 +0000 (0:00:00.203) 0:00:57.317 ***********
2025-07-04 18:03:59.612032 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:03:59.612186 | orchestrator |
2025-07-04 18:03:59.613424 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-04 18:03:59.614764 | orchestrator | Friday 04 July 2025 18:03:59 +0000 (0:00:00.208) 0:00:57.525 ***********
2025-07-04 18:03:59.825196 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:03:59.825680 | orchestrator |
2025-07-04 18:03:59.826165 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-04 18:03:59.827168 | orchestrator | Friday 04 July 2025 18:03:59 +0000 (0:00:00.212) 0:00:57.738 ***********
2025-07-04 18:04:00.534503 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2025-07-04 18:04:00.536882 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2025-07-04 18:04:00.537745 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2025-07-04 18:04:00.537791 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2025-07-04 18:04:00.538851 | orchestrator |
2025-07-04 18:04:00.539769 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-04 18:04:00.540717 | orchestrator | Friday 04 July 2025 18:04:00 +0000 (0:00:00.707) 0:00:58.446 ***********
2025-07-04 18:04:00.816826 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:04:00.816995 | orchestrator |
2025-07-04 18:04:00.818374 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-04 18:04:00.819359 | orchestrator | Friday 04 July 2025 18:04:00 +0000 (0:00:00.282) 0:00:58.729 ***********
2025-07-04 18:04:01.014280 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:04:01.014452 | orchestrator |
2025-07-04 18:04:01.015724 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-04 18:04:01.017011 | orchestrator | Friday 04 July 2025 18:04:01 +0000 (0:00:00.198) 0:00:58.927 ***********
2025-07-04 18:04:01.221239 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:04:01.222069 | orchestrator |
2025-07-04 18:04:01.222586 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-04 18:04:01.223546 | orchestrator | Friday 04 July 2025 18:04:01 +0000 (0:00:00.207) 0:00:59.135 ***********
2025-07-04 18:04:01.415190 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:04:01.416001 | orchestrator |
2025-07-04 18:04:01.416938 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2025-07-04 18:04:01.418062 | orchestrator | Friday 04 July 2025 18:04:01 +0000 (0:00:00.192) 0:00:59.327 ***********
2025-07-04 18:04:01.775357 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:04:01.776377 | orchestrator |
2025-07-04 18:04:01.777411 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2025-07-04 18:04:01.778931 | orchestrator | Friday 04 July 2025 18:04:01 +0000 (0:00:00.361) 0:00:59.689 ***********
2025-07-04 18:04:01.958726 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a98224fe-e18a-5ddc-b2f0-6ffdc4d7e2d6'}})
2025-07-04 18:04:01.958831 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '38a85088-e19d-56c7-801b-f45e1c084bd2'}})
2025-07-04 18:04:01.960279 | orchestrator |
2025-07-04 18:04:01.961723 | orchestrator | TASK [Create block VGs] ********************************************************
2025-07-04 18:04:01.962985 | orchestrator | Friday 04 July 2025 18:04:01 +0000 (0:00:00.181) 0:00:59.870 ***********
2025-07-04 18:04:03.772591 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-a98224fe-e18a-5ddc-b2f0-6ffdc4d7e2d6', 'data_vg': 'ceph-a98224fe-e18a-5ddc-b2f0-6ffdc4d7e2d6'})
2025-07-04 18:04:03.772766 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-38a85088-e19d-56c7-801b-f45e1c084bd2', 'data_vg': 'ceph-38a85088-e19d-56c7-801b-f45e1c084bd2'})
2025-07-04 18:04:03.775872 | orchestrator |
2025-07-04 18:04:03.776800 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2025-07-04 18:04:03.777518 | orchestrator | Friday 04 July 2025 18:04:03 +0000 (0:00:01.814) 0:01:01.685 ***********
2025-07-04 18:04:03.960217 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a98224fe-e18a-5ddc-b2f0-6ffdc4d7e2d6', 'data_vg': 'ceph-a98224fe-e18a-5ddc-b2f0-6ffdc4d7e2d6'})
2025-07-04 18:04:03.961100 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-38a85088-e19d-56c7-801b-f45e1c084bd2', 'data_vg': 'ceph-38a85088-e19d-56c7-801b-f45e1c084bd2'})
2025-07-04 18:04:03.962355 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:04:03.964182 | orchestrator |
2025-07-04 18:04:03.964213 | orchestrator | TASK [Create block LVs] ********************************************************
2025-07-04 18:04:03.964267 | orchestrator | Friday 04 July 2025 18:04:03 +0000 (0:00:00.188) 0:01:01.873 ***********
2025-07-04 18:04:05.234425 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-a98224fe-e18a-5ddc-b2f0-6ffdc4d7e2d6', 'data_vg': 'ceph-a98224fe-e18a-5ddc-b2f0-6ffdc4d7e2d6'})
2025-07-04 18:04:05.234571 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-38a85088-e19d-56c7-801b-f45e1c084bd2', 'data_vg': 'ceph-38a85088-e19d-56c7-801b-f45e1c084bd2'})
2025-07-04 18:04:05.237931 | orchestrator |
2025-07-04 18:04:05.237986 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2025-07-04 18:04:05.237993 | orchestrator | Friday 04 July 2025 18:04:05 +0000 (0:00:01.272) 0:01:03.145 ***********
2025-07-04 18:04:05.385271 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a98224fe-e18a-5ddc-b2f0-6ffdc4d7e2d6', 'data_vg': 'ceph-a98224fe-e18a-5ddc-b2f0-6ffdc4d7e2d6'})
2025-07-04 18:04:05.385378 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-38a85088-e19d-56c7-801b-f45e1c084bd2', 'data_vg': 'ceph-38a85088-e19d-56c7-801b-f45e1c084bd2'})
2025-07-04 18:04:05.385766 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:04:05.385839 | orchestrator |
2025-07-04 18:04:05.386321 | orchestrator | TASK [Create DB VGs] ***********************************************************
2025-07-04 18:04:05.387700 | orchestrator | Friday 04 July 2025 18:04:05 +0000 (0:00:00.151) 0:01:03.297 ***********
2025-07-04 18:04:05.543762 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:04:05.543891 | orchestrator |
2025-07-04 18:04:05.545698 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2025-07-04 18:04:05.546321 | orchestrator | Friday 04 July 2025 18:04:05 +0000 (0:00:00.159) 0:01:03.456 ***********
2025-07-04 18:04:05.697502 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a98224fe-e18a-5ddc-b2f0-6ffdc4d7e2d6', 'data_vg': 'ceph-a98224fe-e18a-5ddc-b2f0-6ffdc4d7e2d6'})
2025-07-04 18:04:05.697685 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-38a85088-e19d-56c7-801b-f45e1c084bd2', 'data_vg': 'ceph-38a85088-e19d-56c7-801b-f45e1c084bd2'})
2025-07-04 18:04:05.699013 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:04:05.700904 | orchestrator |
2025-07-04 18:04:05.700931 | orchestrator | TASK [Create WAL VGs] **********************************************************
2025-07-04 18:04:05.702002 | orchestrator | Friday 04 July 2025 18:04:05 +0000 (0:00:00.153) 0:01:03.610 ***********
2025-07-04 18:04:05.863740 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:04:05.865429 | orchestrator |
2025-07-04 18:04:05.867714 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2025-07-04 18:04:05.869008 | orchestrator | Friday 04 July 2025 18:04:05 +0000 (0:00:00.167) 0:01:03.777 ***********
2025-07-04 18:04:06.022438 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a98224fe-e18a-5ddc-b2f0-6ffdc4d7e2d6', 'data_vg': 'ceph-a98224fe-e18a-5ddc-b2f0-6ffdc4d7e2d6'})
2025-07-04 18:04:06.023542 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-38a85088-e19d-56c7-801b-f45e1c084bd2', 'data_vg': 'ceph-38a85088-e19d-56c7-801b-f45e1c084bd2'})
2025-07-04 18:04:06.024383 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:04:06.025317 | orchestrator |
2025-07-04 18:04:06.025559 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2025-07-04 18:04:06.026652 | orchestrator | Friday 04 July 2025 18:04:06 +0000 (0:00:00.157) 0:01:03.935 ***********
2025-07-04 18:04:06.171815 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:04:06.172927 | orchestrator |
2025-07-04 18:04:06.173384 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2025-07-04 18:04:06.174324 | orchestrator | Friday 04 July 2025 18:04:06 +0000 (0:00:00.148) 0:01:04.083 ***********
2025-07-04 18:04:06.331604 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a98224fe-e18a-5ddc-b2f0-6ffdc4d7e2d6', 'data_vg': 'ceph-a98224fe-e18a-5ddc-b2f0-6ffdc4d7e2d6'})
2025-07-04 18:04:06.333303 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-38a85088-e19d-56c7-801b-f45e1c084bd2', 'data_vg': 'ceph-38a85088-e19d-56c7-801b-f45e1c084bd2'})
2025-07-04 18:04:06.333617 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:04:06.336393 | orchestrator |
2025-07-04 18:04:06.337022 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2025-07-04 18:04:06.337799 | orchestrator | Friday 04 July 2025 18:04:06 +0000 (0:00:00.160) 0:01:04.244 ***********
2025-07-04 18:04:06.512599 | orchestrator | ok: [testbed-node-5]
2025-07-04 18:04:06.513853 | orchestrator |
2025-07-04 18:04:06.514991 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2025-07-04 18:04:06.516264 | orchestrator | Friday 04 July 2025 18:04:06 +0000 (0:00:00.180) 0:01:04.425 ***********
2025-07-04 18:04:06.934326 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a98224fe-e18a-5ddc-b2f0-6ffdc4d7e2d6', 'data_vg': 'ceph-a98224fe-e18a-5ddc-b2f0-6ffdc4d7e2d6'})
2025-07-04 18:04:06.935394 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-38a85088-e19d-56c7-801b-f45e1c084bd2', 'data_vg': 'ceph-38a85088-e19d-56c7-801b-f45e1c084bd2'})
2025-07-04 18:04:06.936866 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:04:06.938550 | orchestrator |
2025-07-04 18:04:06.939066 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2025-07-04 18:04:06.940976 | orchestrator | Friday 04 July 2025 18:04:06 +0000 (0:00:00.420) 0:01:04.845 ***********
2025-07-04 18:04:07.093721 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a98224fe-e18a-5ddc-b2f0-6ffdc4d7e2d6', 'data_vg': 'ceph-a98224fe-e18a-5ddc-b2f0-6ffdc4d7e2d6'})
2025-07-04 18:04:07.093956 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-38a85088-e19d-56c7-801b-f45e1c084bd2', 'data_vg': 'ceph-38a85088-e19d-56c7-801b-f45e1c084bd2'})
2025-07-04 18:04:07.095741 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:04:07.096963 | orchestrator |
2025-07-04 18:04:07.097746 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2025-07-04 18:04:07.098440 | orchestrator | Friday 04 July 2025 18:04:07 +0000 (0:00:00.160) 0:01:05.006 ***********
2025-07-04 18:04:07.248441 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a98224fe-e18a-5ddc-b2f0-6ffdc4d7e2d6', 'data_vg': 'ceph-a98224fe-e18a-5ddc-b2f0-6ffdc4d7e2d6'})
2025-07-04 18:04:07.249964 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-38a85088-e19d-56c7-801b-f45e1c084bd2', 'data_vg': 'ceph-38a85088-e19d-56c7-801b-f45e1c084bd2'})
2025-07-04 18:04:07.251578 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:04:07.252169 | orchestrator |
2025-07-04 18:04:07.253281 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2025-07-04 18:04:07.254257 | orchestrator | Friday 04 July 2025 18:04:07 +0000 (0:00:00.155) 0:01:05.161 ***********
2025-07-04 18:04:07.399300 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:04:07.401424 | orchestrator |
2025-07-04 18:04:07.402772 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2025-07-04 18:04:07.404221 | orchestrator | Friday 04 July 2025 18:04:07 +0000 (0:00:00.149) 0:01:05.311 ***********
2025-07-04 18:04:07.548533 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:04:07.550902 | orchestrator |
2025-07-04 18:04:07.552193 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2025-07-04 18:04:07.553244 | orchestrator | Friday 04 July 2025 18:04:07 +0000 (0:00:00.149) 0:01:05.461 ***********
2025-07-04 18:04:07.707576 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:04:07.709261 | orchestrator |
2025-07-04 18:04:07.709877 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2025-07-04 18:04:07.710943 | orchestrator | Friday 04 July 2025 18:04:07 +0000 (0:00:00.158) 0:01:05.619 ***********
2025-07-04 18:04:07.843290 | orchestrator | ok: [testbed-node-5] => {
2025-07-04 18:04:07.843491 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2025-07-04 18:04:07.845248 | orchestrator | }
2025-07-04 18:04:07.846756 | orchestrator |
2025-07-04 18:04:07.847197 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2025-07-04 18:04:07.848172 | orchestrator | Friday 04 July 2025 18:04:07 +0000 (0:00:00.135) 0:01:05.755 ***********
2025-07-04 18:04:07.997925 | orchestrator | ok: [testbed-node-5] => {
2025-07-04 18:04:07.999943 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2025-07-04 18:04:08.001231 | orchestrator | }
2025-07-04 18:04:08.002951 | orchestrator |
2025-07-04 18:04:08.005792 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2025-07-04 18:04:08.006730 | orchestrator | Friday 04 July 2025 18:04:07 +0000 (0:00:00.153) 0:01:05.909 ***********
2025-07-04 18:04:08.154185 | orchestrator | ok: [testbed-node-5] => {
2025-07-04 18:04:08.154943 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2025-07-04 18:04:08.156172 | orchestrator | }
2025-07-04 18:04:08.157622 | orchestrator |
2025-07-04 18:04:08.158424 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2025-07-04 18:04:08.159632 | orchestrator | Friday 04 July 2025 18:04:08 +0000 (0:00:00.157) 0:01:06.067 ***********
2025-07-04 18:04:08.661328 | orchestrator | ok: [testbed-node-5]
2025-07-04 18:04:08.662173 | orchestrator |
2025-07-04 18:04:08.662777 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2025-07-04 18:04:08.663593 | orchestrator | Friday 04 July 2025 18:04:08 +0000 (0:00:00.507) 0:01:06.575 ***********
2025-07-04 18:04:09.180089 | orchestrator | ok: [testbed-node-5]
2025-07-04 18:04:09.181161 | orchestrator |
2025-07-04 18:04:09.181943 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2025-07-04 18:04:09.183739 | orchestrator | Friday 04 July 2025 18:04:09 +0000 (0:00:00.517) 0:01:07.092 ***********
2025-07-04 18:04:09.707646 | orchestrator | ok: [testbed-node-5]
2025-07-04 18:04:09.708016 | orchestrator |
2025-07-04 18:04:09.708332 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2025-07-04 18:04:09.710240 | orchestrator | Friday 04 July 2025 18:04:09 +0000 (0:00:00.526) 0:01:07.618 ***********
2025-07-04 18:04:10.057526 | orchestrator | ok: [testbed-node-5]
2025-07-04 18:04:10.057694 | orchestrator |
2025-07-04 18:04:10.058893 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2025-07-04 18:04:10.060021 | orchestrator | Friday 04 July 2025 18:04:10 +0000 (0:00:00.350) 0:01:07.969 ***********
2025-07-04 18:04:10.171293 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:04:10.172671 | orchestrator |
2025-07-04 18:04:10.173408 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2025-07-04 18:04:10.174615 | orchestrator | Friday 04 July 2025 18:04:10 +0000 (0:00:00.114) 0:01:08.084 ***********
2025-07-04 18:04:10.284718 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:04:10.286183 | orchestrator |
2025-07-04 18:04:10.287509 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2025-07-04 18:04:10.288784 | orchestrator | Friday 04 July 2025 18:04:10 +0000 (0:00:00.113) 0:01:08.197 ***********
2025-07-04 18:04:10.431303 | orchestrator | ok: [testbed-node-5] => {
2025-07-04 18:04:10.432352 | orchestrator |     "vgs_report": {
2025-07-04 18:04:10.434068 | orchestrator |         "vg": []
2025-07-04 18:04:10.435334 | orchestrator |     }
2025-07-04 18:04:10.435881 | orchestrator | }
2025-07-04 18:04:10.436739 | orchestrator |
2025-07-04 18:04:10.437375 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2025-07-04 18:04:10.437955 | orchestrator | Friday 04 July 2025 18:04:10 +0000 (0:00:00.146) 0:01:08.344 ***********
2025-07-04 18:04:10.573793 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:04:10.575255 | orchestrator |
2025-07-04 18:04:10.576728 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2025-07-04 18:04:10.577925 | orchestrator | Friday 04 July 2025 18:04:10 +0000 (0:00:00.147) 0:01:08.487 ***********
2025-07-04 18:04:10.721737 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:04:10.722263 | orchestrator |
2025-07-04 18:04:10.723630 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2025-07-04 18:04:10.725176 | orchestrator | Friday 04 July 2025 18:04:10 +0000 (0:00:00.146) 0:01:08.634 ***********
2025-07-04 18:04:10.870241 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:04:10.870355 | orchestrator |
2025-07-04 18:04:10.870473 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2025-07-04 18:04:10.871010 | orchestrator | Friday 04 July 2025 18:04:10 +0000 (0:00:00.146) 0:01:08.780 ***********
2025-07-04 18:04:11.005054 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:04:11.006505 | orchestrator |
2025-07-04 18:04:11.007630 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2025-07-04 18:04:11.008330 | orchestrator | Friday 04 July 2025 18:04:10 +0000 (0:00:00.135) 0:01:08.916 ***********
2025-07-04 18:04:11.146083 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:04:11.146312 | orchestrator |
2025-07-04 18:04:11.147013 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2025-07-04 18:04:11.149378 | orchestrator | Friday 04 July 2025 18:04:11 +0000 (0:00:00.142) 0:01:09.058 ***********
2025-07-04 18:04:11.288634 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:04:11.288889 | orchestrator |
2025-07-04 18:04:11.290190 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2025-07-04 18:04:11.290789 | orchestrator | Friday 04 July 2025 18:04:11 +0000 (0:00:00.142) 0:01:09.200 ***********
2025-07-04 18:04:11.436109 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:04:11.437226 | orchestrator |
2025-07-04 18:04:11.437933 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2025-07-04 18:04:11.439013 | orchestrator | Friday 04 July 2025 18:04:11 +0000 (0:00:00.148) 0:01:09.349 ***********
2025-07-04 18:04:11.582798 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:04:11.583110 | orchestrator |
2025-07-04 18:04:11.584458 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2025-07-04 18:04:11.585573 | orchestrator | Friday 04 July 2025 18:04:11 +0000 (0:00:00.145) 0:01:09.495 ***********
2025-07-04 18:04:11.938417 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:04:11.939958 | orchestrator |
2025-07-04 18:04:11.941175 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2025-07-04 18:04:11.941902 | orchestrator | Friday 04 July 2025 18:04:11 +0000 (0:00:00.356) 0:01:09.852 ***********
2025-07-04 18:04:12.079077 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:04:12.080773 | orchestrator |
2025-07-04 18:04:12.081255 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2025-07-04 18:04:12.082398 | orchestrator | Friday 04 July 2025 18:04:12 +0000 (0:00:00.138) 0:01:09.990 ***********
2025-07-04 18:04:12.226650 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:04:12.228502 | orchestrator |
2025-07-04 18:04:12.229201 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2025-07-04 18:04:12.230481 | orchestrator | Friday 04 July 2025 18:04:12 +0000 (0:00:00.149) 0:01:10.140 ***********
2025-07-04 18:04:12.378091 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:04:12.379697 | orchestrator |
2025-07-04 18:04:12.380330 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2025-07-04 18:04:12.381260 | orchestrator | Friday 04 July 2025 18:04:12 +0000 (0:00:00.150) 0:01:10.290 ***********
2025-07-04 18:04:12.526145 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:04:12.526372 | orchestrator |
2025-07-04 18:04:12.527270 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2025-07-04 18:04:12.528339 | orchestrator | Friday 04 July 2025 18:04:12 +0000 (0:00:00.148) 0:01:10.438 ***********
2025-07-04 18:04:12.672979 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:04:12.673796 | orchestrator |
2025-07-04 18:04:12.674509 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2025-07-04 18:04:12.675392 | orchestrator | Friday 04 July 2025 18:04:12 +0000 (0:00:00.145) 0:01:10.584 ***********
2025-07-04 18:04:12.830221 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a98224fe-e18a-5ddc-b2f0-6ffdc4d7e2d6', 'data_vg': 'ceph-a98224fe-e18a-5ddc-b2f0-6ffdc4d7e2d6'})
2025-07-04 18:04:12.830899 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-38a85088-e19d-56c7-801b-f45e1c084bd2', 'data_vg': 'ceph-38a85088-e19d-56c7-801b-f45e1c084bd2'})
2025-07-04 18:04:12.831737 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:04:12.832467 | orchestrator |
2025-07-04 18:04:12.834360 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2025-07-04 18:04:12.834384 | orchestrator | Friday 04 July 2025 18:04:12 +0000 (0:00:00.159) 0:01:10.744 ***********
2025-07-04 18:04:12.981261 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a98224fe-e18a-5ddc-b2f0-6ffdc4d7e2d6', 'data_vg': 'ceph-a98224fe-e18a-5ddc-b2f0-6ffdc4d7e2d6'})
2025-07-04 18:04:12.981836 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-38a85088-e19d-56c7-801b-f45e1c084bd2', 'data_vg': 'ceph-38a85088-e19d-56c7-801b-f45e1c084bd2'})
2025-07-04 18:04:12.982609 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:04:12.983236 | orchestrator |
2025-07-04 18:04:12.984887 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2025-07-04 18:04:12.985000 | orchestrator | Friday 04 July 2025 18:04:12 +0000 (0:00:00.150) 0:01:10.894 ***********
2025-07-04 18:04:13.146892 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a98224fe-e18a-5ddc-b2f0-6ffdc4d7e2d6', 'data_vg': 'ceph-a98224fe-e18a-5ddc-b2f0-6ffdc4d7e2d6'})
2025-07-04 18:04:13.147092 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-38a85088-e19d-56c7-801b-f45e1c084bd2', 'data_vg': 'ceph-38a85088-e19d-56c7-801b-f45e1c084bd2'})
2025-07-04 18:04:13.147839 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:04:13.148898 | orchestrator |
2025-07-04 18:04:13.149961 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2025-07-04 18:04:13.152026 | orchestrator | Friday 04 July 2025 18:04:13 +0000 (0:00:00.165) 0:01:11.059 ***********
2025-07-04 18:04:13.289857 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a98224fe-e18a-5ddc-b2f0-6ffdc4d7e2d6', 'data_vg': 'ceph-a98224fe-e18a-5ddc-b2f0-6ffdc4d7e2d6'})
2025-07-04 18:04:13.290955 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-38a85088-e19d-56c7-801b-f45e1c084bd2', 'data_vg': 'ceph-38a85088-e19d-56c7-801b-f45e1c084bd2'})
2025-07-04 18:04:13.291922 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:04:13.292676 | orchestrator |
2025-07-04 18:04:13.294651 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2025-07-04 18:04:13.295977 | orchestrator | Friday 04 July 2025 18:04:13 +0000 (0:00:00.144) 0:01:11.203 ***********
2025-07-04 18:04:13.457035 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a98224fe-e18a-5ddc-b2f0-6ffdc4d7e2d6', 'data_vg': 'ceph-a98224fe-e18a-5ddc-b2f0-6ffdc4d7e2d6'})
2025-07-04 18:04:13.458189 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-38a85088-e19d-56c7-801b-f45e1c084bd2', 'data_vg': 'ceph-38a85088-e19d-56c7-801b-f45e1c084bd2'})
2025-07-04 18:04:13.459955 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:04:13.460644 | orchestrator |
2025-07-04 18:04:13.462420 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2025-07-04 18:04:13.462721 | orchestrator | Friday 04 July 2025 18:04:13 +0000 (0:00:00.166) 0:01:11.369 ***********
2025-07-04 18:04:13.603498 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a98224fe-e18a-5ddc-b2f0-6ffdc4d7e2d6', 'data_vg': 'ceph-a98224fe-e18a-5ddc-b2f0-6ffdc4d7e2d6'})
2025-07-04 18:04:13.604135 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-38a85088-e19d-56c7-801b-f45e1c084bd2', 'data_vg': 'ceph-38a85088-e19d-56c7-801b-f45e1c084bd2'})
2025-07-04 18:04:13.605693 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:04:13.606900 | orchestrator |
2025-07-04 18:04:13.607998 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2025-07-04 18:04:13.608873 | orchestrator | Friday 04 July 2025 18:04:13 +0000 (0:00:00.146) 0:01:11.516 ***********
2025-07-04 18:04:13.978252 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a98224fe-e18a-5ddc-b2f0-6ffdc4d7e2d6', 'data_vg': 'ceph-a98224fe-e18a-5ddc-b2f0-6ffdc4d7e2d6'})
2025-07-04 18:04:13.981446 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-38a85088-e19d-56c7-801b-f45e1c084bd2', 'data_vg': 'ceph-38a85088-e19d-56c7-801b-f45e1c084bd2'})
2025-07-04 18:04:13.981544 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:04:13.983232 | orchestrator |
2025-07-04 18:04:13.984381 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-07-04 18:04:13.984928 | orchestrator | Friday 04 July 2025 18:04:13 +0000 (0:00:00.374) 0:01:11.891 ***********
2025-07-04 18:04:14.139637 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a98224fe-e18a-5ddc-b2f0-6ffdc4d7e2d6', 'data_vg': 'ceph-a98224fe-e18a-5ddc-b2f0-6ffdc4d7e2d6'})
2025-07-04 18:04:14.139715 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-38a85088-e19d-56c7-801b-f45e1c084bd2', 'data_vg': 'ceph-38a85088-e19d-56c7-801b-f45e1c084bd2'})
2025-07-04 18:04:14.141334 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:04:14.142396 | orchestrator |
2025-07-04 18:04:14.143308 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2025-07-04 18:04:14.144190 | orchestrator | Friday 04 July 2025 18:04:14 +0000 (0:00:00.157) 0:01:12.049 ***********
2025-07-04 18:04:14.692602 | orchestrator | ok: [testbed-node-5]
2025-07-04 18:04:14.692914 | orchestrator |
2025-07-04 18:04:14.694953 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2025-07-04 18:04:14.695004 | orchestrator | Friday 04 July 2025 18:04:14 +0000 (0:00:00.556) 0:01:12.605 ***********
2025-07-04 18:04:15.228622 | orchestrator | ok: [testbed-node-5]
2025-07-04 18:04:15.229298 | orchestrator |
2025-07-04 18:04:15.231167 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2025-07-04 18:04:15.231233 | orchestrator | Friday 04 July 2025 18:04:15 +0000 (0:00:00.534) 0:01:13.139 ***********
2025-07-04 18:04:15.374989 | orchestrator | ok: [testbed-node-5]
2025-07-04 18:04:15.375916 | orchestrator |
2025-07-04 18:04:15.376683 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2025-07-04 18:04:15.377687 | orchestrator | Friday 04 July 2025 18:04:15 +0000 (0:00:00.148) 0:01:13.288 ***********
2025-07-04 18:04:15.551615 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-38a85088-e19d-56c7-801b-f45e1c084bd2', 'vg_name': 'ceph-38a85088-e19d-56c7-801b-f45e1c084bd2'})
2025-07-04 18:04:15.551854 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-a98224fe-e18a-5ddc-b2f0-6ffdc4d7e2d6', 'vg_name': 'ceph-a98224fe-e18a-5ddc-b2f0-6ffdc4d7e2d6'})
2025-07-04 18:04:15.552850 | orchestrator |
2025-07-04 18:04:15.554933 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2025-07-04 18:04:15.554948 | orchestrator | Friday 04 July 2025 18:04:15 +0000 (0:00:00.175) 0:01:13.463 ***********
2025-07-04 18:04:15.709692 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a98224fe-e18a-5ddc-b2f0-6ffdc4d7e2d6', 'data_vg': 'ceph-a98224fe-e18a-5ddc-b2f0-6ffdc4d7e2d6'})
2025-07-04 18:04:15.710343 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-38a85088-e19d-56c7-801b-f45e1c084bd2', 'data_vg': 'ceph-38a85088-e19d-56c7-801b-f45e1c084bd2'})
2025-07-04 18:04:15.712275 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:04:15.712301 | orchestrator |
2025-07-04 18:04:15.713046 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-07-04 18:04:15.713692 | orchestrator | Friday 04 July 2025 18:04:15 +0000 (0:00:00.157) 0:01:13.621 ***********
2025-07-04 18:04:15.863159 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a98224fe-e18a-5ddc-b2f0-6ffdc4d7e2d6', 'data_vg': 'ceph-a98224fe-e18a-5ddc-b2f0-6ffdc4d7e2d6'})
2025-07-04 18:04:15.865790 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-38a85088-e19d-56c7-801b-f45e1c084bd2', 'data_vg': 'ceph-38a85088-e19d-56c7-801b-f45e1c084bd2'})
2025-07-04 18:04:15.866253 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:04:15.868576 | orchestrator |
2025-07-04 18:04:15.868731 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-07-04 18:04:15.869547 | orchestrator | Friday 04 July 2025 18:04:15 +0000 (0:00:00.155) 0:01:13.776 ***********
2025-07-04 18:04:16.017049 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a98224fe-e18a-5ddc-b2f0-6ffdc4d7e2d6', 'data_vg': 'ceph-a98224fe-e18a-5ddc-b2f0-6ffdc4d7e2d6'})
2025-07-04 18:04:16.017297 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-38a85088-e19d-56c7-801b-f45e1c084bd2', 'data_vg': 'ceph-38a85088-e19d-56c7-801b-f45e1c084bd2'})
2025-07-04 18:04:16.017941 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:04:16.019019 | orchestrator |
2025-07-04 18:04:16.020416 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-07-04 18:04:16.021412 | orchestrator | Friday 04 July 2025 18:04:16 +0000 (0:00:00.153) 0:01:13.930 ***********
2025-07-04 18:04:16.161653 | orchestrator | ok: [testbed-node-5] => {
2025-07-04 18:04:16.161830 | orchestrator |     "lvm_report": {
2025-07-04 18:04:16.163082 | orchestrator |         "lv": [
2025-07-04
18:04:16.164138 | orchestrator |  { 2025-07-04 18:04:16.165433 | orchestrator |  "lv_name": "osd-block-38a85088-e19d-56c7-801b-f45e1c084bd2", 2025-07-04 18:04:16.166291 | orchestrator |  "vg_name": "ceph-38a85088-e19d-56c7-801b-f45e1c084bd2" 2025-07-04 18:04:16.167319 | orchestrator |  }, 2025-07-04 18:04:16.168054 | orchestrator |  { 2025-07-04 18:04:16.169067 | orchestrator |  "lv_name": "osd-block-a98224fe-e18a-5ddc-b2f0-6ffdc4d7e2d6", 2025-07-04 18:04:16.170403 | orchestrator |  "vg_name": "ceph-a98224fe-e18a-5ddc-b2f0-6ffdc4d7e2d6" 2025-07-04 18:04:16.170579 | orchestrator |  } 2025-07-04 18:04:16.171390 | orchestrator |  ], 2025-07-04 18:04:16.172268 | orchestrator |  "pv": [ 2025-07-04 18:04:16.173233 | orchestrator |  { 2025-07-04 18:04:16.173914 | orchestrator |  "pv_name": "/dev/sdb", 2025-07-04 18:04:16.174479 | orchestrator |  "vg_name": "ceph-a98224fe-e18a-5ddc-b2f0-6ffdc4d7e2d6" 2025-07-04 18:04:16.175320 | orchestrator |  }, 2025-07-04 18:04:16.175863 | orchestrator |  { 2025-07-04 18:04:16.176430 | orchestrator |  "pv_name": "/dev/sdc", 2025-07-04 18:04:16.177270 | orchestrator |  "vg_name": "ceph-38a85088-e19d-56c7-801b-f45e1c084bd2" 2025-07-04 18:04:16.177673 | orchestrator |  } 2025-07-04 18:04:16.178401 | orchestrator |  ] 2025-07-04 18:04:16.179028 | orchestrator |  } 2025-07-04 18:04:16.179654 | orchestrator | } 2025-07-04 18:04:16.180137 | orchestrator | 2025-07-04 18:04:16.180745 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-04 18:04:16.181335 | orchestrator | 2025-07-04 18:04:16 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-07-04 18:04:16.181827 | orchestrator | 2025-07-04 18:04:16 | INFO  | Please wait and do not abort execution. 
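The "Combine JSON from _lvs_cmd_output/_pvs_cmd_output" task above merges the LV and PV reports by volume group to produce the `lvm_report` structure printed in the log. A minimal sketch of that merge, assuming input shaped like `lvs -o lv_name,vg_name --reportformat json` and `pvs -o pv_name,vg_name --reportformat json` output (the sample values below are illustrative, not taken from the run):

```python
import json

# Sample strings shaped like LVM's `--reportformat json` output
# (illustrative values; the real run reports ceph-* VGs on /dev/sdb, /dev/sdc).
lvs_json = '{"report": [{"lv": [{"lv_name": "osd-block-1", "vg_name": "ceph-1"}]}]}'
pvs_json = '{"report": [{"pv": [{"pv_name": "/dev/sdb", "vg_name": "ceph-1"}]}]}'

def combine_reports(lvs_out: str, pvs_out: str) -> dict:
    """Merge the lv and pv report lists into one dict, like lvm_report."""
    lv = json.loads(lvs_out)["report"][0]["lv"]
    pv = json.loads(pvs_out)["report"][0]["pv"]
    return {"lv": lv, "pv": pv}

def pv_for_lv(report: dict, lv_name: str) -> list:
    """Return the physical volumes backing the VG that holds the given LV."""
    vg_by_lv = {entry["lv_name"]: entry["vg_name"] for entry in report["lv"]}
    vg = vg_by_lv[lv_name]
    return [entry["pv_name"] for entry in report["pv"] if entry["vg_name"] == vg]
```

With this report, `pv_for_lv(combine_reports(lvs_json, pvs_json), "osd-block-1")` resolves the OSD block LV to its backing device, the same LV-to-PV association the playbook validates before failing on missing `lvm_volumes` entries.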
2025-07-04 18:04:16.182643 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-07-04 18:04:16.183378 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-07-04 18:04:16.183781 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-07-04 18:04:16.184628 | orchestrator |
2025-07-04 18:04:16.184754 | orchestrator |
2025-07-04 18:04:16.185691 | orchestrator |
2025-07-04 18:04:16.185789 | orchestrator | TASKS RECAP ********************************************************************
2025-07-04 18:04:16.186319 | orchestrator | Friday 04 July 2025 18:04:16 +0000 (0:00:00.142) 0:01:14.072 ***********
2025-07-04 18:04:16.186925 | orchestrator | ===============================================================================
2025-07-04 18:04:16.187410 | orchestrator | Create block VGs -------------------------------------------------------- 5.67s
2025-07-04 18:04:16.187623 | orchestrator | Create block LVs -------------------------------------------------------- 3.95s
2025-07-04 18:04:16.188322 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.88s
2025-07-04 18:04:16.188722 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.58s
2025-07-04 18:04:16.189231 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.57s
2025-07-04 18:04:16.189957 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.55s
2025-07-04 18:04:16.190132 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.54s
2025-07-04 18:04:16.191027 | orchestrator | Add known partitions to the list of available block devices ------------- 1.46s
2025-07-04 18:04:16.191205 | orchestrator | Add known links to the list of available block devices ------------------ 1.25s
2025-07-04 18:04:16.191716 | orchestrator | Add known partitions to the list of available block devices ------------- 1.06s
2025-07-04 18:04:16.192245 | orchestrator | Print LVM report data --------------------------------------------------- 0.99s
2025-07-04 18:04:16.192550 | orchestrator | Add known partitions to the list of available block devices ------------- 0.90s
2025-07-04 18:04:16.193270 | orchestrator | Fail if DB LV defined in lvm_volumes is missing ------------------------- 0.76s
2025-07-04 18:04:16.193420 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.75s
2025-07-04 18:04:16.195398 | orchestrator | Count OSDs put on ceph_db_devices defined in lvm_volumes ---------------- 0.74s
2025-07-04 18:04:16.196240 | orchestrator | Add known links to the list of available block devices ------------------ 0.73s
2025-07-04 18:04:16.197167 | orchestrator | Create DB LVs for ceph_db_wal_devices ----------------------------------- 0.71s
2025-07-04 18:04:16.198905 | orchestrator | Add known partitions to the list of available block devices ------------- 0.71s
2025-07-04 18:04:16.199633 | orchestrator | Print 'Create DB VGs' --------------------------------------------------- 0.70s
2025-07-04 18:04:16.200405 | orchestrator | Get initial list of available block devices ----------------------------- 0.70s
2025-07-04 18:04:18.591315 | orchestrator | Registering Redlock._acquired_script
2025-07-04 18:04:18.591425 | orchestrator | Registering Redlock._extend_script
2025-07-04 18:04:18.591441 | orchestrator | Registering Redlock._release_script
2025-07-04 18:04:18.655266 | orchestrator | 2025-07-04 18:04:18 | INFO  | Task bc8bd624-9286-403a-9c42-0f677af5b962 (facts) was prepared for execution.
2025-07-04 18:04:18.655361 | orchestrator | 2025-07-04 18:04:18 | INFO  | It takes a moment until task bc8bd624-9286-403a-9c42-0f677af5b962 (facts) has been started and output is visible here.
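The log's task handling ("was prepared for execution", then repeated "Task … is in state STARTED" checks followed by "Wait 1 second(s) until the next check") is a simple poll-until-done loop over background task IDs. A minimal sketch of that pattern, where `get_state` is a hypothetical callback standing in for whatever state lookup the orchestrator actually performs:

```python
import time

def wait_for_tasks(get_state, task_ids, poll_interval=1.0):
    """Poll task states until no task is left in STARTED, printing a status
    line per task per round, like the log's wait loop. Returns the number
    of polling rounds performed."""
    pending = set(task_ids)
    rounds = 0
    while pending:
        rounds += 1
        for task_id in sorted(pending):  # sorted() copies, so discard() is safe
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state != "STARTED":
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(poll_interval)} second(s) until the next check")
            time.sleep(poll_interval)
    return rounds
```

Usage with a fake state source: `wait_for_tasks(lambda t: next(states[t]), ["t1", "t2"], poll_interval=0)`, where `states` maps each ID to an iterator of successive states.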
2025-07-04 18:04:22.871682 | orchestrator |
2025-07-04 18:04:22.872754 | orchestrator | PLAY [Apply role facts] ********************************************************
2025-07-04 18:04:22.873530 | orchestrator |
2025-07-04 18:04:22.874443 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-07-04 18:04:22.875999 | orchestrator | Friday 04 July 2025 18:04:22 +0000 (0:00:00.297) 0:00:00.297 ***********
2025-07-04 18:04:23.975019 | orchestrator | ok: [testbed-node-1]
2025-07-04 18:04:23.976000 | orchestrator | ok: [testbed-manager]
2025-07-04 18:04:23.978073 | orchestrator | ok: [testbed-node-0]
2025-07-04 18:04:23.978580 | orchestrator | ok: [testbed-node-2]
2025-07-04 18:04:23.979445 | orchestrator | ok: [testbed-node-3]
2025-07-04 18:04:23.980406 | orchestrator | ok: [testbed-node-4]
2025-07-04 18:04:23.980855 | orchestrator | ok: [testbed-node-5]
2025-07-04 18:04:23.981849 | orchestrator |
2025-07-04 18:04:23.982450 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-07-04 18:04:23.983141 | orchestrator | Friday 04 July 2025 18:04:23 +0000 (0:00:01.103) 0:00:01.400 ***********
2025-07-04 18:04:24.165611 | orchestrator | skipping: [testbed-manager]
2025-07-04 18:04:24.247685 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:04:24.332581 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:04:24.413652 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:04:24.494996 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:04:25.293740 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:04:25.295008 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:04:25.295247 | orchestrator |
2025-07-04 18:04:25.298152 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-07-04 18:04:25.298193 | orchestrator |
2025-07-04 18:04:25.298206 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-07-04 18:04:25.298217 | orchestrator | Friday 04 July 2025 18:04:25 +0000 (0:00:01.322) 0:00:02.723 ***********
2025-07-04 18:04:30.240932 | orchestrator | ok: [testbed-node-2]
2025-07-04 18:04:30.242176 | orchestrator | ok: [testbed-node-1]
2025-07-04 18:04:30.243685 | orchestrator | ok: [testbed-node-0]
2025-07-04 18:04:30.245049 | orchestrator | ok: [testbed-manager]
2025-07-04 18:04:30.245942 | orchestrator | ok: [testbed-node-3]
2025-07-04 18:04:30.247806 | orchestrator | ok: [testbed-node-5]
2025-07-04 18:04:30.248984 | orchestrator | ok: [testbed-node-4]
2025-07-04 18:04:30.250222 | orchestrator |
2025-07-04 18:04:30.250990 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-07-04 18:04:30.252272 | orchestrator |
2025-07-04 18:04:30.252873 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-07-04 18:04:30.253893 | orchestrator | Friday 04 July 2025 18:04:30 +0000 (0:00:04.946) 0:00:07.670 ***********
2025-07-04 18:04:30.435328 | orchestrator | skipping: [testbed-manager]
2025-07-04 18:04:30.525044 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:04:30.601148 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:04:30.680952 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:04:30.759246 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:04:30.811581 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:04:30.813005 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:04:30.814072 | orchestrator |
2025-07-04 18:04:30.814740 | orchestrator | PLAY RECAP *********************************************************************
2025-07-04 18:04:30.815828 | orchestrator | 2025-07-04 18:04:30 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-07-04 18:04:30.815855 | orchestrator | 2025-07-04 18:04:30 | INFO  | Please wait and do not abort execution.
2025-07-04 18:04:30.816962 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-04 18:04:30.817544 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-04 18:04:30.818120 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-04 18:04:30.818760 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-04 18:04:30.819871 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-04 18:04:30.820623 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-04 18:04:30.821689 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-04 18:04:30.822734 | orchestrator |
2025-07-04 18:04:30.823436 | orchestrator |
2025-07-04 18:04:30.824447 | orchestrator | TASKS RECAP ********************************************************************
2025-07-04 18:04:30.825154 | orchestrator | Friday 04 July 2025 18:04:30 +0000 (0:00:00.571) 0:00:08.241 ***********
2025-07-04 18:04:30.825840 | orchestrator | ===============================================================================
2025-07-04 18:04:30.826738 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.95s
2025-07-04 18:04:30.827218 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.32s
2025-07-04 18:04:30.827797 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.10s
2025-07-04 18:04:30.828553 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.57s
2025-07-04 18:04:31.449754 | orchestrator |
2025-07-04 18:04:31.451759 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Fri Jul 4 18:04:31 UTC 2025
2025-07-04 18:04:31.451815 | orchestrator |
2025-07-04 18:04:33.178618 | orchestrator | 2025-07-04 18:04:33 | INFO  | Collection nutshell is prepared for execution
2025-07-04 18:04:33.178744 | orchestrator | 2025-07-04 18:04:33 | INFO  | D [0] - dotfiles
2025-07-04 18:04:33.181558 | orchestrator | Registering Redlock._acquired_script
2025-07-04 18:04:33.181635 | orchestrator | Registering Redlock._extend_script
2025-07-04 18:04:33.181884 | orchestrator | Registering Redlock._release_script
2025-07-04 18:04:33.189295 | orchestrator | 2025-07-04 18:04:33 | INFO  | D [0] - homer
2025-07-04 18:04:33.189359 | orchestrator | 2025-07-04 18:04:33 | INFO  | D [0] - netdata
2025-07-04 18:04:33.189427 | orchestrator | 2025-07-04 18:04:33 | INFO  | D [0] - openstackclient
2025-07-04 18:04:33.189843 | orchestrator | 2025-07-04 18:04:33 | INFO  | D [0] - phpmyadmin
2025-07-04 18:04:33.190208 | orchestrator | 2025-07-04 18:04:33 | INFO  | A [0] - common
2025-07-04 18:04:33.193824 | orchestrator | 2025-07-04 18:04:33 | INFO  | A [1] -- loadbalancer
2025-07-04 18:04:33.193883 | orchestrator | 2025-07-04 18:04:33 | INFO  | D [2] --- opensearch
2025-07-04 18:04:33.193991 | orchestrator | 2025-07-04 18:04:33 | INFO  | A [2] --- mariadb-ng
2025-07-04 18:04:33.194421 | orchestrator | 2025-07-04 18:04:33 | INFO  | D [3] ---- horizon
2025-07-04 18:04:33.195051 | orchestrator | 2025-07-04 18:04:33 | INFO  | A [3] ---- keystone
2025-07-04 18:04:33.195095 | orchestrator | 2025-07-04 18:04:33 | INFO  | A [4] ----- neutron
2025-07-04 18:04:33.195523 | orchestrator | 2025-07-04 18:04:33 | INFO  | D [5] ------ wait-for-nova
2025-07-04 18:04:33.195551 | orchestrator | 2025-07-04 18:04:33 | INFO  | A [5] ------ octavia
2025-07-04 18:04:33.198279 | orchestrator | 2025-07-04 18:04:33 | INFO  | D [4] ----- barbican
2025-07-04 18:04:33.198340 | orchestrator | 2025-07-04 18:04:33 | INFO  | D [4] ----- designate
2025-07-04 18:04:33.198351 | orchestrator | 2025-07-04 18:04:33 | INFO  | D [4] ----- ironic
2025-07-04 18:04:33.198360 | orchestrator | 2025-07-04 18:04:33 | INFO  | D [4] ----- placement
2025-07-04 18:04:33.198370 | orchestrator | 2025-07-04 18:04:33 | INFO  | D [4] ----- magnum
2025-07-04 18:04:33.198380 | orchestrator | 2025-07-04 18:04:33 | INFO  | A [1] -- openvswitch
2025-07-04 18:04:33.198389 | orchestrator | 2025-07-04 18:04:33 | INFO  | D [2] --- ovn
2025-07-04 18:04:33.198399 | orchestrator | 2025-07-04 18:04:33 | INFO  | D [1] -- memcached
2025-07-04 18:04:33.198408 | orchestrator | 2025-07-04 18:04:33 | INFO  | D [1] -- redis
2025-07-04 18:04:33.198506 | orchestrator | 2025-07-04 18:04:33 | INFO  | D [1] -- rabbitmq-ng
2025-07-04 18:04:33.198991 | orchestrator | 2025-07-04 18:04:33 | INFO  | A [0] - kubernetes
2025-07-04 18:04:33.201771 | orchestrator | 2025-07-04 18:04:33 | INFO  | D [1] -- kubeconfig
2025-07-04 18:04:33.202065 | orchestrator | 2025-07-04 18:04:33 | INFO  | A [1] -- copy-kubeconfig
2025-07-04 18:04:33.202157 | orchestrator | 2025-07-04 18:04:33 | INFO  | A [0] - ceph
2025-07-04 18:04:33.204862 | orchestrator | 2025-07-04 18:04:33 | INFO  | A [1] -- ceph-pools
2025-07-04 18:04:33.204890 | orchestrator | 2025-07-04 18:04:33 | INFO  | A [2] --- copy-ceph-keys
2025-07-04 18:04:33.204900 | orchestrator | 2025-07-04 18:04:33 | INFO  | A [3] ---- cephclient
2025-07-04 18:04:33.204910 | orchestrator | 2025-07-04 18:04:33 | INFO  | D [4] ----- ceph-bootstrap-dashboard
2025-07-04 18:04:33.205454 | orchestrator | 2025-07-04 18:04:33 | INFO  | A [4] ----- wait-for-keystone
2025-07-04 18:04:33.205480 | orchestrator | 2025-07-04 18:04:33 | INFO  | D [5] ------ kolla-ceph-rgw
2025-07-04 18:04:33.205700 | orchestrator | 2025-07-04 18:04:33 | INFO  | D [5] ------ glance
2025-07-04 18:04:33.205719 | orchestrator | 2025-07-04 18:04:33 | INFO  | D [5] ------ cinder
2025-07-04 18:04:33.206129 | orchestrator | 2025-07-04 18:04:33 | INFO  | D [5] ------ nova
2025-07-04 18:04:33.206152 | orchestrator | 2025-07-04 18:04:33 | INFO  | A [4] ----- prometheus
2025-07-04 18:04:33.206161 | orchestrator | 2025-07-04 18:04:33 | INFO  | D [5] ------ grafana
2025-07-04 18:04:33.451468 | orchestrator | 2025-07-04 18:04:33 | INFO  | All tasks of the collection nutshell are prepared for execution
2025-07-04 18:04:33.451582 | orchestrator | 2025-07-04 18:04:33 | INFO  | Tasks are running in the background
2025-07-04 18:04:36.495248 | orchestrator | 2025-07-04 18:04:36 | INFO  | No task IDs specified, wait for all currently running tasks
2025-07-04 18:04:38.640477 | orchestrator | 2025-07-04 18:04:38 | INFO  | Task eaa398c3-cfe2-4b19-a4d8-3962f9961e87 is in state STARTED
2025-07-04 18:04:38.641463 | orchestrator | 2025-07-04 18:04:38 | INFO  | Task e0e76211-7573-4a08-b15a-ce28d5a098c5 is in state STARTED
2025-07-04 18:04:38.642361 | orchestrator | 2025-07-04 18:04:38 | INFO  | Task 8772adf5-2caf-44e0-b513-44578a2ff97c is in state STARTED
2025-07-04 18:04:38.646560 | orchestrator | 2025-07-04 18:04:38 | INFO  | Task 847f3d47-d817-46da-b0dd-ce5c950699c9 is in state STARTED
2025-07-04 18:04:38.647366 | orchestrator | 2025-07-04 18:04:38 | INFO  | Task 261d55e6-6da6-4fa9-bf28-a93843f9191f is in state STARTED
2025-07-04 18:04:38.648194 | orchestrator | 2025-07-04 18:04:38 | INFO  | Task 1fbe3cf7-b39a-439f-bfc5-7ee910003545 is in state STARTED
2025-07-04 18:04:38.649254 | orchestrator | 2025-07-04 18:04:38 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED
2025-07-04 18:04:38.649434 | orchestrator | 2025-07-04 18:04:38 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:04:41.697172 | orchestrator | 2025-07-04 18:04:41 | INFO  | Task eaa398c3-cfe2-4b19-a4d8-3962f9961e87 is in state STARTED
2025-07-04 18:04:41.697828 | orchestrator | 2025-07-04 18:04:41 | INFO  | Task e0e76211-7573-4a08-b15a-ce28d5a098c5 is in state STARTED
2025-07-04 18:04:41.702316 | orchestrator | 2025-07-04 18:04:41 | INFO  | Task 8772adf5-2caf-44e0-b513-44578a2ff97c is in state STARTED
2025-07-04 18:04:41.703288 | orchestrator | 2025-07-04 18:04:41 | INFO  | Task 847f3d47-d817-46da-b0dd-ce5c950699c9 is in state STARTED
2025-07-04 18:04:41.704034 | orchestrator | 2025-07-04 18:04:41 | INFO  | Task 261d55e6-6da6-4fa9-bf28-a93843f9191f is in state STARTED
2025-07-04 18:04:41.706160 | orchestrator | 2025-07-04 18:04:41 | INFO  | Task 1fbe3cf7-b39a-439f-bfc5-7ee910003545 is in state STARTED
2025-07-04 18:04:41.708381 | orchestrator | 2025-07-04 18:04:41 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED
2025-07-04 18:04:41.711913 | orchestrator | 2025-07-04 18:04:41 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:04:44.756349 | orchestrator | 2025-07-04 18:04:44 | INFO  | Task eaa398c3-cfe2-4b19-a4d8-3962f9961e87 is in state STARTED
2025-07-04 18:04:44.757888 | orchestrator | 2025-07-04 18:04:44 | INFO  | Task e0e76211-7573-4a08-b15a-ce28d5a098c5 is in state STARTED
2025-07-04 18:04:44.759718 | orchestrator | 2025-07-04 18:04:44 | INFO  | Task 8772adf5-2caf-44e0-b513-44578a2ff97c is in state STARTED
2025-07-04 18:04:44.764501 | orchestrator | 2025-07-04 18:04:44 | INFO  | Task 847f3d47-d817-46da-b0dd-ce5c950699c9 is in state STARTED
2025-07-04 18:04:44.764538 | orchestrator | 2025-07-04 18:04:44 | INFO  | Task 261d55e6-6da6-4fa9-bf28-a93843f9191f is in state STARTED
2025-07-04 18:04:44.767456 | orchestrator | 2025-07-04 18:04:44 | INFO  | Task 1fbe3cf7-b39a-439f-bfc5-7ee910003545 is in state STARTED
2025-07-04 18:04:44.767479 | orchestrator | 2025-07-04 18:04:44 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED
2025-07-04 18:04:44.767499 | orchestrator | 2025-07-04 18:04:44 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:04:47.808947 | orchestrator | 2025-07-04 18:04:47 | INFO  | Task eaa398c3-cfe2-4b19-a4d8-3962f9961e87 is in state STARTED
2025-07-04 18:04:47.809042 | orchestrator | 2025-07-04 18:04:47 | INFO  | Task e0e76211-7573-4a08-b15a-ce28d5a098c5 is in state STARTED
2025-07-04 18:04:47.809062 | orchestrator | 2025-07-04 18:04:47 | INFO  | Task 8772adf5-2caf-44e0-b513-44578a2ff97c is in state STARTED
2025-07-04 18:04:47.809079 | orchestrator | 2025-07-04 18:04:47 | INFO  | Task 847f3d47-d817-46da-b0dd-ce5c950699c9 is in state STARTED
2025-07-04 18:04:47.810708 | orchestrator | 2025-07-04 18:04:47 | INFO  | Task 261d55e6-6da6-4fa9-bf28-a93843f9191f is in state STARTED
2025-07-04 18:04:47.813769 | orchestrator | 2025-07-04 18:04:47 | INFO  | Task 1fbe3cf7-b39a-439f-bfc5-7ee910003545 is in state STARTED
2025-07-04 18:04:47.813884 | orchestrator | 2025-07-04 18:04:47 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED
2025-07-04 18:04:47.814280 | orchestrator | 2025-07-04 18:04:47 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:04:50.914302 | orchestrator | 2025-07-04 18:04:50 | INFO  | Task eaa398c3-cfe2-4b19-a4d8-3962f9961e87 is in state STARTED
2025-07-04 18:04:50.916126 | orchestrator | 2025-07-04 18:04:50 | INFO  | Task e0e76211-7573-4a08-b15a-ce28d5a098c5 is in state STARTED
2025-07-04 18:04:50.917835 | orchestrator | 2025-07-04 18:04:50 | INFO  | Task 8772adf5-2caf-44e0-b513-44578a2ff97c is in state STARTED
2025-07-04 18:04:50.922883 | orchestrator | 2025-07-04 18:04:50 | INFO  | Task 847f3d47-d817-46da-b0dd-ce5c950699c9 is in state STARTED
2025-07-04 18:04:50.923824 | orchestrator | 2025-07-04 18:04:50 | INFO  | Task 261d55e6-6da6-4fa9-bf28-a93843f9191f is in state STARTED
2025-07-04 18:04:50.928320 | orchestrator | 2025-07-04 18:04:50 | INFO  | Task 1fbe3cf7-b39a-439f-bfc5-7ee910003545 is in state STARTED
2025-07-04 18:04:50.928356 | orchestrator | 2025-07-04 18:04:50 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED
2025-07-04 18:04:50.928364 | orchestrator | 2025-07-04 18:04:50 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:04:53.971734 | orchestrator | 2025-07-04 18:04:53 | INFO  | Task eaa398c3-cfe2-4b19-a4d8-3962f9961e87 is in state STARTED
2025-07-04 18:04:53.976330 | orchestrator | 2025-07-04 18:04:53 | INFO  | Task e0e76211-7573-4a08-b15a-ce28d5a098c5 is in state STARTED
2025-07-04 18:04:53.981870 | orchestrator | 2025-07-04 18:04:53 | INFO  | Task 8772adf5-2caf-44e0-b513-44578a2ff97c is in state STARTED
2025-07-04 18:04:53.987330 | orchestrator | 2025-07-04 18:04:53 | INFO  | Task 847f3d47-d817-46da-b0dd-ce5c950699c9 is in state STARTED
2025-07-04 18:04:53.988258 | orchestrator | 2025-07-04 18:04:53 | INFO  | Task 261d55e6-6da6-4fa9-bf28-a93843f9191f is in state STARTED
2025-07-04 18:04:53.990126 | orchestrator | 2025-07-04 18:04:53 | INFO  | Task 1fbe3cf7-b39a-439f-bfc5-7ee910003545 is in state STARTED
2025-07-04 18:04:53.990902 | orchestrator | 2025-07-04 18:04:53 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED
2025-07-04 18:04:53.990979 | orchestrator | 2025-07-04 18:04:53 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:04:57.048762 | orchestrator | 2025-07-04 18:04:57 | INFO  | Task eaa398c3-cfe2-4b19-a4d8-3962f9961e87 is in state STARTED
2025-07-04 18:04:57.048919 | orchestrator | 2025-07-04 18:04:57 | INFO  | Task e0e76211-7573-4a08-b15a-ce28d5a098c5 is in state STARTED
2025-07-04 18:04:57.049836 | orchestrator | 2025-07-04 18:04:57 | INFO  | Task 8772adf5-2caf-44e0-b513-44578a2ff97c is in state STARTED
2025-07-04 18:04:57.052893 | orchestrator | 2025-07-04 18:04:57 | INFO  | Task 847f3d47-d817-46da-b0dd-ce5c950699c9 is in state STARTED
2025-07-04 18:04:57.053408 | orchestrator | 2025-07-04 18:04:57 | INFO  | Task 261d55e6-6da6-4fa9-bf28-a93843f9191f is in state STARTED
2025-07-04 18:04:57.053902 | orchestrator | 2025-07-04 18:04:57 | INFO  | Task 1fbe3cf7-b39a-439f-bfc5-7ee910003545 is in state STARTED
2025-07-04 18:04:57.058712 | orchestrator | 2025-07-04 18:04:57 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED
2025-07-04 18:04:57.058754 | orchestrator | 2025-07-04 18:04:57 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:05:00.134857 | orchestrator | 2025-07-04 18:05:00 | INFO  | Task eaa398c3-cfe2-4b19-a4d8-3962f9961e87 is in state STARTED
2025-07-04 18:05:00.134947 | orchestrator | 2025-07-04 18:05:00 | INFO  | Task e0e76211-7573-4a08-b15a-ce28d5a098c5 is in state STARTED
2025-07-04 18:05:00.134961 | orchestrator | 2025-07-04 18:05:00 | INFO  | Task 8772adf5-2caf-44e0-b513-44578a2ff97c is in state STARTED
2025-07-04 18:05:00.134973 | orchestrator | 2025-07-04 18:05:00 | INFO  | Task 847f3d47-d817-46da-b0dd-ce5c950699c9 is in state STARTED
2025-07-04 18:05:00.134984 | orchestrator | 2025-07-04 18:05:00 | INFO  | Task 261d55e6-6da6-4fa9-bf28-a93843f9191f is in state STARTED
2025-07-04 18:05:00.134995 | orchestrator | 2025-07-04 18:05:00 | INFO  | Task 1fbe3cf7-b39a-439f-bfc5-7ee910003545 is in state STARTED
2025-07-04 18:05:00.135005 | orchestrator | 2025-07-04 18:05:00 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED
2025-07-04 18:05:00.135016 | orchestrator | 2025-07-04 18:05:00 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:05:03.176261 | orchestrator | 2025-07-04 18:05:03 | INFO  | Task eaa398c3-cfe2-4b19-a4d8-3962f9961e87 is in state STARTED
2025-07-04 18:05:03.176903 | orchestrator | 2025-07-04 18:05:03 | INFO  | Task e0e76211-7573-4a08-b15a-ce28d5a098c5 is in state STARTED
2025-07-04 18:05:03.183871 | orchestrator | 2025-07-04 18:05:03 | INFO  | Task 8772adf5-2caf-44e0-b513-44578a2ff97c is in state STARTED
2025-07-04 18:05:03.187203 | orchestrator | 2025-07-04 18:05:03 | INFO  | Task 847f3d47-d817-46da-b0dd-ce5c950699c9 is in state STARTED
2025-07-04 18:05:03.191235 | orchestrator | 2025-07-04 18:05:03 | INFO  | Task 261d55e6-6da6-4fa9-bf28-a93843f9191f is in state STARTED
2025-07-04 18:05:03.194194 | orchestrator | 2025-07-04 18:05:03 | INFO  | Task 1fbe3cf7-b39a-439f-bfc5-7ee910003545 is in state STARTED
2025-07-04 18:05:03.194469 | orchestrator | 2025-07-04 18:05:03 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED
2025-07-04 18:05:03.194507 | orchestrator | 2025-07-04 18:05:03 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:05:06.257689 | orchestrator | 2025-07-04 18:05:06 | INFO  | Task eaa398c3-cfe2-4b19-a4d8-3962f9961e87 is in state SUCCESS
2025-07-04 18:05:06.258985 | orchestrator |
2025-07-04 18:05:06.259024 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] *****************************************
2025-07-04 18:05:06.259037 | orchestrator |
2025-07-04 18:05:06.259049 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] ****
2025-07-04 18:05:06.259061 | orchestrator | Friday 04 July 2025 18:04:47 +0000 (0:00:00.976) 0:00:00.976 ***********
2025-07-04 18:05:06.259101 | orchestrator | changed: [testbed-manager]
2025-07-04 18:05:06.259115 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:05:06.259126 | orchestrator | changed: [testbed-node-1]
2025-07-04 18:05:06.259136 | orchestrator | changed: [testbed-node-2]
2025-07-04 18:05:06.259147 | orchestrator | changed: [testbed-node-4]
2025-07-04 18:05:06.259158 | orchestrator | changed: [testbed-node-3]
2025-07-04 18:05:06.259168 | orchestrator | changed: [testbed-node-5]
2025-07-04 18:05:06.259179 | orchestrator |
2025-07-04 18:05:06.259190 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ********
2025-07-04 18:05:06.259202 | orchestrator | Friday 04 July 2025 18:04:52 +0000 (0:00:02.012) 0:00:05.787 ***********
2025-07-04 18:05:06.259214 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2025-07-04 18:05:06.259226 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2025-07-04 18:05:06.259236 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2025-07-04 18:05:06.259247 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2025-07-04 18:05:06.259266 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2025-07-04 18:05:06.259277 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2025-07-04 18:05:06.259313 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2025-07-04 18:05:06.259324 | orchestrator |
2025-07-04 18:05:06.259335 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] ***
2025-07-04 18:05:06.259346 | orchestrator | Friday 04 July 2025 18:04:54 +0000 (0:00:02.012) 0:00:07.799 ***********
2025-07-04 18:05:06.259360 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-07-04 18:04:52.861824', 'end': '2025-07-04 18:04:52.865356', 'delta': '0:00:00.003532', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-07-04 18:05:06.259376 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-07-04 18:04:52.954645', 'end': '2025-07-04 18:04:52.965477', 'delta': '0:00:00.010832', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-07-04 18:05:06.259388 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-07-04 18:04:52.930689', 'end': '2025-07-04 18:04:52.940099', 'delta': '0:00:00.009410', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-07-04 18:05:06.259424 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-07-04 18:04:53.166173', 'end': '2025-07-04 18:04:53.174812', 'delta': '0:00:00.008639', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-07-04 18:05:06.259442 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-07-04 18:04:53.457933', 'end': '2025-07-04 18:04:53.465862', 'delta': '0:00:00.007929', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-07-04 18:05:06.259467 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-07-04 18:04:53.628428', 'end': '2025-07-04 18:04:53.639250', 'delta': '0:00:00.010822', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}},
'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-07-04 18:05:06.259479 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-07-04 18:04:53.857924', 'end': '2025-07-04 18:04:53.865959', 'delta': '0:00:00.008035', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-07-04 18:05:06.259490 | orchestrator | 2025-07-04 18:05:06.259501 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] 
**** 2025-07-04 18:05:06.259512 | orchestrator | Friday 04 July 2025 18:04:56 +0000 (0:00:02.833) 0:00:10.633 *********** 2025-07-04 18:05:06.259523 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-07-04 18:05:06.259534 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-07-04 18:05:06.259545 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-07-04 18:05:06.259556 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-07-04 18:05:06.259566 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-07-04 18:05:06.259580 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-07-04 18:05:06.259593 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-07-04 18:05:06.259605 | orchestrator | 2025-07-04 18:05:06.259617 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ****************** 2025-07-04 18:05:06.259629 | orchestrator | Friday 04 July 2025 18:04:59 +0000 (0:00:02.285) 0:00:12.918 *********** 2025-07-04 18:05:06.259642 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2025-07-04 18:05:06.259655 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2025-07-04 18:05:06.259668 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2025-07-04 18:05:06.259681 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2025-07-04 18:05:06.259694 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2025-07-04 18:05:06.259707 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2025-07-04 18:05:06.259719 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2025-07-04 18:05:06.259732 | orchestrator | 2025-07-04 18:05:06.259744 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-04 18:05:06.259765 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-04 18:05:06.259787 | orchestrator | 
testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-04 18:05:06.259800 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-04 18:05:06.259813 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-04 18:05:06.259826 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-04 18:05:06.259839 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-04 18:05:06.259852 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-04 18:05:06.259865 | orchestrator | 2025-07-04 18:05:06.259877 | orchestrator | 2025-07-04 18:05:06.259890 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-04 18:05:06.259904 | orchestrator | Friday 04 July 2025 18:05:03 +0000 (0:00:04.736) 0:00:17.654 *********** 2025-07-04 18:05:06.259916 | orchestrator | =============================================================================== 2025-07-04 18:05:06.259930 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 4.81s 2025-07-04 18:05:06.259941 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 4.74s 2025-07-04 18:05:06.259952 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 2.84s 2025-07-04 18:05:06.259962 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 2.28s 2025-07-04 18:05:06.259973 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. 
-------- 2.01s 2025-07-04 18:05:06.260912 | orchestrator | 2025-07-04 18:05:06 | INFO  | Task e0e76211-7573-4a08-b15a-ce28d5a098c5 is in state STARTED 2025-07-04 18:05:06.262188 | orchestrator | 2025-07-04 18:05:06 | INFO  | Task 9808d906-08c4-466f-8fbf-19c9a61ce4d6 is in state STARTED 2025-07-04 18:05:06.264580 | orchestrator | 2025-07-04 18:05:06 | INFO  | Task 8772adf5-2caf-44e0-b513-44578a2ff97c is in state STARTED 2025-07-04 18:05:06.268026 | orchestrator | 2025-07-04 18:05:06 | INFO  | Task 847f3d47-d817-46da-b0dd-ce5c950699c9 is in state STARTED 2025-07-04 18:05:06.273146 | orchestrator | 2025-07-04 18:05:06 | INFO  | Task 261d55e6-6da6-4fa9-bf28-a93843f9191f is in state STARTED 2025-07-04 18:05:06.273198 | orchestrator | 2025-07-04 18:05:06 | INFO  | Task 1fbe3cf7-b39a-439f-bfc5-7ee910003545 is in state STARTED 2025-07-04 18:05:06.274493 | orchestrator | 2025-07-04 18:05:06 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED 2025-07-04 18:05:06.275351 | orchestrator | 2025-07-04 18:05:06 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:05:09.336434 | orchestrator | 2025-07-04 18:05:09 | INFO  | Task e0e76211-7573-4a08-b15a-ce28d5a098c5 is in state STARTED 2025-07-04 18:05:09.339311 | orchestrator | 2025-07-04 18:05:09 | INFO  | Task 9808d906-08c4-466f-8fbf-19c9a61ce4d6 is in state STARTED 2025-07-04 18:05:09.340632 | orchestrator | 2025-07-04 18:05:09 | INFO  | Task 8772adf5-2caf-44e0-b513-44578a2ff97c is in state STARTED 2025-07-04 18:05:09.341819 | orchestrator | 2025-07-04 18:05:09 | INFO  | Task 847f3d47-d817-46da-b0dd-ce5c950699c9 is in state STARTED 2025-07-04 18:05:09.345154 | orchestrator | 2025-07-04 18:05:09 | INFO  | Task 261d55e6-6da6-4fa9-bf28-a93843f9191f is in state STARTED 2025-07-04 18:05:09.346723 | orchestrator | 2025-07-04 18:05:09 | INFO  | Task 1fbe3cf7-b39a-439f-bfc5-7ee910003545 is in state STARTED 2025-07-04 18:05:09.348586 | orchestrator | 2025-07-04 18:05:09 | INFO  | Task 
02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED 2025-07-04 18:05:09.348625 | orchestrator | 2025-07-04 18:05:09 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:05:12.394830 | orchestrator | 2025-07-04 18:05:12 | INFO  | Task e0e76211-7573-4a08-b15a-ce28d5a098c5 is in state STARTED 2025-07-04 18:05:12.395027 | orchestrator | 2025-07-04 18:05:12 | INFO  | Task 9808d906-08c4-466f-8fbf-19c9a61ce4d6 is in state STARTED 2025-07-04 18:05:12.401029 | orchestrator | 2025-07-04 18:05:12 | INFO  | Task 8772adf5-2caf-44e0-b513-44578a2ff97c is in state STARTED 2025-07-04 18:05:12.401114 | orchestrator | 2025-07-04 18:05:12 | INFO  | Task 847f3d47-d817-46da-b0dd-ce5c950699c9 is in state STARTED 2025-07-04 18:05:12.401375 | orchestrator | 2025-07-04 18:05:12 | INFO  | Task 261d55e6-6da6-4fa9-bf28-a93843f9191f is in state STARTED 2025-07-04 18:05:12.401838 | orchestrator | 2025-07-04 18:05:12 | INFO  | Task 1fbe3cf7-b39a-439f-bfc5-7ee910003545 is in state STARTED 2025-07-04 18:05:12.403667 | orchestrator | 2025-07-04 18:05:12 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED 2025-07-04 18:05:12.403729 | orchestrator | 2025-07-04 18:05:12 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:05:15.466226 | orchestrator | 2025-07-04 18:05:15 | INFO  | Task e0e76211-7573-4a08-b15a-ce28d5a098c5 is in state STARTED 2025-07-04 18:05:15.469120 | orchestrator | 2025-07-04 18:05:15 | INFO  | Task 9808d906-08c4-466f-8fbf-19c9a61ce4d6 is in state STARTED 2025-07-04 18:05:15.472178 | orchestrator | 2025-07-04 18:05:15 | INFO  | Task 8772adf5-2caf-44e0-b513-44578a2ff97c is in state STARTED 2025-07-04 18:05:15.472223 | orchestrator | 2025-07-04 18:05:15 | INFO  | Task 847f3d47-d817-46da-b0dd-ce5c950699c9 is in state STARTED 2025-07-04 18:05:15.475398 | orchestrator | 2025-07-04 18:05:15 | INFO  | Task 261d55e6-6da6-4fa9-bf28-a93843f9191f is in state STARTED 2025-07-04 18:05:15.477159 | orchestrator | 2025-07-04 18:05:15 | INFO  | Task 
1fbe3cf7-b39a-439f-bfc5-7ee910003545 is in state STARTED 2025-07-04 18:05:15.486620 | orchestrator | 2025-07-04 18:05:15 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED 2025-07-04 18:05:15.486693 | orchestrator | 2025-07-04 18:05:15 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:05:18.552665 | orchestrator | 2025-07-04 18:05:18 | INFO  | Task e0e76211-7573-4a08-b15a-ce28d5a098c5 is in state STARTED 2025-07-04 18:05:18.552903 | orchestrator | 2025-07-04 18:05:18 | INFO  | Task 9808d906-08c4-466f-8fbf-19c9a61ce4d6 is in state STARTED 2025-07-04 18:05:18.554679 | orchestrator | 2025-07-04 18:05:18 | INFO  | Task 8772adf5-2caf-44e0-b513-44578a2ff97c is in state STARTED 2025-07-04 18:05:18.555412 | orchestrator | 2025-07-04 18:05:18 | INFO  | Task 847f3d47-d817-46da-b0dd-ce5c950699c9 is in state STARTED 2025-07-04 18:05:18.556401 | orchestrator | 2025-07-04 18:05:18 | INFO  | Task 261d55e6-6da6-4fa9-bf28-a93843f9191f is in state STARTED 2025-07-04 18:05:18.557550 | orchestrator | 2025-07-04 18:05:18 | INFO  | Task 1fbe3cf7-b39a-439f-bfc5-7ee910003545 is in state STARTED 2025-07-04 18:05:18.559136 | orchestrator | 2025-07-04 18:05:18 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED 2025-07-04 18:05:18.559190 | orchestrator | 2025-07-04 18:05:18 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:05:21.617953 | orchestrator | 2025-07-04 18:05:21 | INFO  | Task e0e76211-7573-4a08-b15a-ce28d5a098c5 is in state STARTED 2025-07-04 18:05:21.618653 | orchestrator | 2025-07-04 18:05:21 | INFO  | Task 9808d906-08c4-466f-8fbf-19c9a61ce4d6 is in state STARTED 2025-07-04 18:05:21.619518 | orchestrator | 2025-07-04 18:05:21 | INFO  | Task 8772adf5-2caf-44e0-b513-44578a2ff97c is in state STARTED 2025-07-04 18:05:21.622656 | orchestrator | 2025-07-04 18:05:21 | INFO  | Task 847f3d47-d817-46da-b0dd-ce5c950699c9 is in state STARTED 2025-07-04 18:05:21.624934 | orchestrator | 2025-07-04 18:05:21 | INFO  | Task 
261d55e6-6da6-4fa9-bf28-a93843f9191f is in state STARTED 2025-07-04 18:05:21.625761 | orchestrator | 2025-07-04 18:05:21 | INFO  | Task 1fbe3cf7-b39a-439f-bfc5-7ee910003545 is in state STARTED 2025-07-04 18:05:21.627228 | orchestrator | 2025-07-04 18:05:21 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED 2025-07-04 18:05:21.627295 | orchestrator | 2025-07-04 18:05:21 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:05:24.661589 | orchestrator | 2025-07-04 18:05:24 | INFO  | Task e0e76211-7573-4a08-b15a-ce28d5a098c5 is in state STARTED 2025-07-04 18:05:24.663866 | orchestrator | 2025-07-04 18:05:24 | INFO  | Task 9808d906-08c4-466f-8fbf-19c9a61ce4d6 is in state STARTED 2025-07-04 18:05:24.664504 | orchestrator | 2025-07-04 18:05:24 | INFO  | Task 8772adf5-2caf-44e0-b513-44578a2ff97c is in state STARTED 2025-07-04 18:05:24.665365 | orchestrator | 2025-07-04 18:05:24 | INFO  | Task 847f3d47-d817-46da-b0dd-ce5c950699c9 is in state STARTED 2025-07-04 18:05:24.665761 | orchestrator | 2025-07-04 18:05:24 | INFO  | Task 261d55e6-6da6-4fa9-bf28-a93843f9191f is in state SUCCESS 2025-07-04 18:05:24.668861 | orchestrator | 2025-07-04 18:05:24 | INFO  | Task 1fbe3cf7-b39a-439f-bfc5-7ee910003545 is in state STARTED 2025-07-04 18:05:24.669630 | orchestrator | 2025-07-04 18:05:24 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED 2025-07-04 18:05:24.669641 | orchestrator | 2025-07-04 18:05:24 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:05:27.722279 | orchestrator | 2025-07-04 18:05:27 | INFO  | Task e0e76211-7573-4a08-b15a-ce28d5a098c5 is in state STARTED 2025-07-04 18:05:27.722359 | orchestrator | 2025-07-04 18:05:27 | INFO  | Task 9808d906-08c4-466f-8fbf-19c9a61ce4d6 is in state STARTED 2025-07-04 18:05:27.724368 | orchestrator | 2025-07-04 18:05:27 | INFO  | Task 8772adf5-2caf-44e0-b513-44578a2ff97c is in state STARTED 2025-07-04 18:05:27.725143 | orchestrator | 2025-07-04 18:05:27 | INFO  | Task 
847f3d47-d817-46da-b0dd-ce5c950699c9 is in state STARTED 2025-07-04 18:05:27.730977 | orchestrator | 2025-07-04 18:05:27 | INFO  | Task 1fbe3cf7-b39a-439f-bfc5-7ee910003545 is in state STARTED 2025-07-04 18:05:27.733652 | orchestrator | 2025-07-04 18:05:27 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED 2025-07-04 18:05:27.733693 | orchestrator | 2025-07-04 18:05:27 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:05:30.787575 | orchestrator | 2025-07-04 18:05:30 | INFO  | Task e0e76211-7573-4a08-b15a-ce28d5a098c5 is in state STARTED 2025-07-04 18:05:30.793505 | orchestrator | 2025-07-04 18:05:30 | INFO  | Task 9808d906-08c4-466f-8fbf-19c9a61ce4d6 is in state STARTED 2025-07-04 18:05:30.793563 | orchestrator | 2025-07-04 18:05:30 | INFO  | Task 8772adf5-2caf-44e0-b513-44578a2ff97c is in state STARTED 2025-07-04 18:05:30.794117 | orchestrator | 2025-07-04 18:05:30 | INFO  | Task 847f3d47-d817-46da-b0dd-ce5c950699c9 is in state STARTED 2025-07-04 18:05:30.798297 | orchestrator | 2025-07-04 18:05:30 | INFO  | Task 1fbe3cf7-b39a-439f-bfc5-7ee910003545 is in state STARTED 2025-07-04 18:05:30.801835 | orchestrator | 2025-07-04 18:05:30 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED 2025-07-04 18:05:30.801934 | orchestrator | 2025-07-04 18:05:30 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:05:33.845366 | orchestrator | 2025-07-04 18:05:33 | INFO  | Task e0e76211-7573-4a08-b15a-ce28d5a098c5 is in state STARTED 2025-07-04 18:05:33.845439 | orchestrator | 2025-07-04 18:05:33 | INFO  | Task 9808d906-08c4-466f-8fbf-19c9a61ce4d6 is in state STARTED 2025-07-04 18:05:33.845685 | orchestrator | 2025-07-04 18:05:33 | INFO  | Task 8772adf5-2caf-44e0-b513-44578a2ff97c is in state STARTED 2025-07-04 18:05:33.846508 | orchestrator | 2025-07-04 18:05:33 | INFO  | Task 847f3d47-d817-46da-b0dd-ce5c950699c9 is in state STARTED 2025-07-04 18:05:33.847418 | orchestrator | 2025-07-04 18:05:33 | INFO  | Task 
1fbe3cf7-b39a-439f-bfc5-7ee910003545 is in state STARTED 2025-07-04 18:05:33.848702 | orchestrator | 2025-07-04 18:05:33 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED 2025-07-04 18:05:33.848721 | orchestrator | 2025-07-04 18:05:33 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:05:36.889192 | orchestrator | 2025-07-04 18:05:36 | INFO  | Task e0e76211-7573-4a08-b15a-ce28d5a098c5 is in state SUCCESS 2025-07-04 18:05:36.893481 | orchestrator | 2025-07-04 18:05:36 | INFO  | Task 9808d906-08c4-466f-8fbf-19c9a61ce4d6 is in state STARTED 2025-07-04 18:05:36.893546 | orchestrator | 2025-07-04 18:05:36 | INFO  | Task 8772adf5-2caf-44e0-b513-44578a2ff97c is in state STARTED 2025-07-04 18:05:36.893561 | orchestrator | 2025-07-04 18:05:36 | INFO  | Task 847f3d47-d817-46da-b0dd-ce5c950699c9 is in state STARTED 2025-07-04 18:05:36.893727 | orchestrator | 2025-07-04 18:05:36 | INFO  | Task 1fbe3cf7-b39a-439f-bfc5-7ee910003545 is in state STARTED 2025-07-04 18:05:36.896914 | orchestrator | 2025-07-04 18:05:36 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED 2025-07-04 18:05:36.896951 | orchestrator | 2025-07-04 18:05:36 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:05:39.942215 | orchestrator | 2025-07-04 18:05:39 | INFO  | Task 9808d906-08c4-466f-8fbf-19c9a61ce4d6 is in state STARTED 2025-07-04 18:05:39.943841 | orchestrator | 2025-07-04 18:05:39 | INFO  | Task 8772adf5-2caf-44e0-b513-44578a2ff97c is in state STARTED 2025-07-04 18:05:39.952144 | orchestrator | 2025-07-04 18:05:39 | INFO  | Task 847f3d47-d817-46da-b0dd-ce5c950699c9 is in state STARTED 2025-07-04 18:05:39.952216 | orchestrator | 2025-07-04 18:05:39 | INFO  | Task 1fbe3cf7-b39a-439f-bfc5-7ee910003545 is in state STARTED 2025-07-04 18:05:39.953399 | orchestrator | 2025-07-04 18:05:39 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED 2025-07-04 18:05:39.953425 | orchestrator | 2025-07-04 18:05:39 | INFO  | Wait 1 
second(s) until the next check 2025-07-04 18:05:43.000370 | orchestrator | 2025-07-04 18:05:42 | INFO  | Task 9808d906-08c4-466f-8fbf-19c9a61ce4d6 is in state STARTED 2025-07-04 18:05:43.002295 | orchestrator | 2025-07-04 18:05:42 | INFO  | Task 8772adf5-2caf-44e0-b513-44578a2ff97c is in state STARTED 2025-07-04 18:05:43.008972 | orchestrator | 2025-07-04 18:05:43 | INFO  | Task 847f3d47-d817-46da-b0dd-ce5c950699c9 is in state STARTED 2025-07-04 18:05:43.011709 | orchestrator | 2025-07-04 18:05:43 | INFO  | Task 1fbe3cf7-b39a-439f-bfc5-7ee910003545 is in state STARTED 2025-07-04 18:05:43.011778 | orchestrator | 2025-07-04 18:05:43 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED 2025-07-04 18:05:43.011799 | orchestrator | 2025-07-04 18:05:43 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:05:46.079761 | orchestrator | 2025-07-04 18:05:46 | INFO  | Task 9808d906-08c4-466f-8fbf-19c9a61ce4d6 is in state STARTED 2025-07-04 18:05:46.081949 | orchestrator | 2025-07-04 18:05:46 | INFO  | Task 8772adf5-2caf-44e0-b513-44578a2ff97c is in state STARTED 2025-07-04 18:05:46.088827 | orchestrator | 2025-07-04 18:05:46 | INFO  | Task 847f3d47-d817-46da-b0dd-ce5c950699c9 is in state STARTED 2025-07-04 18:05:46.094877 | orchestrator | 2025-07-04 18:05:46 | INFO  | Task 1fbe3cf7-b39a-439f-bfc5-7ee910003545 is in state STARTED 2025-07-04 18:05:46.094901 | orchestrator | 2025-07-04 18:05:46 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED 2025-07-04 18:05:46.094907 | orchestrator | 2025-07-04 18:05:46 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:05:49.147392 | orchestrator | 2025-07-04 18:05:49 | INFO  | Task 9808d906-08c4-466f-8fbf-19c9a61ce4d6 is in state STARTED 2025-07-04 18:05:49.151346 | orchestrator | 2025-07-04 18:05:49 | INFO  | Task 8772adf5-2caf-44e0-b513-44578a2ff97c is in state STARTED 2025-07-04 18:05:49.157121 | orchestrator | 2025-07-04 18:05:49 | INFO  | Task 
847f3d47-d817-46da-b0dd-ce5c950699c9 is in state STARTED 2025-07-04 18:05:49.158263 | orchestrator | 2025-07-04 18:05:49 | INFO  | Task 1fbe3cf7-b39a-439f-bfc5-7ee910003545 is in state STARTED 2025-07-04 18:05:49.162196 | orchestrator | 2025-07-04 18:05:49 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED 2025-07-04 18:05:49.162243 | orchestrator | 2025-07-04 18:05:49 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:05:52.218882 | orchestrator | 2025-07-04 18:05:52 | INFO  | Task 9808d906-08c4-466f-8fbf-19c9a61ce4d6 is in state STARTED 2025-07-04 18:05:52.218926 | orchestrator | 2025-07-04 18:05:52 | INFO  | Task 8772adf5-2caf-44e0-b513-44578a2ff97c is in state STARTED 2025-07-04 18:05:52.218935 | orchestrator | 2025-07-04 18:05:52 | INFO  | Task 847f3d47-d817-46da-b0dd-ce5c950699c9 is in state STARTED 2025-07-04 18:05:52.219940 | orchestrator | 2025-07-04 18:05:52 | INFO  | Task 1fbe3cf7-b39a-439f-bfc5-7ee910003545 is in state STARTED 2025-07-04 18:05:52.220790 | orchestrator | 2025-07-04 18:05:52 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED 2025-07-04 18:05:52.223541 | orchestrator | 2025-07-04 18:05:52 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:05:55.263674 | orchestrator | 2025-07-04 18:05:55 | INFO  | Task 9808d906-08c4-466f-8fbf-19c9a61ce4d6 is in state STARTED 2025-07-04 18:05:55.264012 | orchestrator | 2025-07-04 18:05:55 | INFO  | Task 8772adf5-2caf-44e0-b513-44578a2ff97c is in state STARTED 2025-07-04 18:05:55.266268 | orchestrator | 2025-07-04 18:05:55 | INFO  | Task 847f3d47-d817-46da-b0dd-ce5c950699c9 is in state STARTED 2025-07-04 18:05:55.268341 | orchestrator | 2025-07-04 18:05:55 | INFO  | Task 1fbe3cf7-b39a-439f-bfc5-7ee910003545 is in state STARTED 2025-07-04 18:05:55.268381 | orchestrator | 2025-07-04 18:05:55 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED 2025-07-04 18:05:55.268420 | orchestrator | 2025-07-04 18:05:55 | INFO  | Wait 1 
second(s) until the next check 2025-07-04 18:05:58.304349 | orchestrator | 2025-07-04 18:05:58 | INFO  | Task 9808d906-08c4-466f-8fbf-19c9a61ce4d6 is in state STARTED 2025-07-04 18:05:58.305346 | orchestrator | 2025-07-04 18:05:58 | INFO  | Task 8772adf5-2caf-44e0-b513-44578a2ff97c is in state STARTED 2025-07-04 18:05:58.307102 | orchestrator | 2025-07-04 18:05:58 | INFO  | Task 847f3d47-d817-46da-b0dd-ce5c950699c9 is in state STARTED 2025-07-04 18:05:58.310418 | orchestrator | 2025-07-04 18:05:58 | INFO  | Task 1fbe3cf7-b39a-439f-bfc5-7ee910003545 is in state STARTED 2025-07-04 18:05:58.312202 | orchestrator | 2025-07-04 18:05:58 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED 2025-07-04 18:05:58.312419 | orchestrator | 2025-07-04 18:05:58 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:06:01.378239 | orchestrator | 2025-07-04 18:06:01 | INFO  | Task 9808d906-08c4-466f-8fbf-19c9a61ce4d6 is in state STARTED 2025-07-04 18:06:01.382612 | orchestrator | 2025-07-04 18:06:01.382685 | orchestrator | 2025-07-04 18:06:01.382708 | orchestrator | PLAY [Apply role homer] ******************************************************** 2025-07-04 18:06:01.382729 | orchestrator | 2025-07-04 18:06:01.382747 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] *** 2025-07-04 18:06:01.382771 | orchestrator | Friday 04 July 2025 18:04:47 +0000 (0:00:00.939) 0:00:00.939 *********** 2025-07-04 18:06:01.382790 | orchestrator | ok: [testbed-manager] => { 2025-07-04 18:06:01.382812 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter." 
2025-07-04 18:06:01.382831 | orchestrator | } 2025-07-04 18:06:01.382849 | orchestrator | 2025-07-04 18:06:01.382866 | orchestrator | TASK [osism.services.homer : Create traefik external network] ****************** 2025-07-04 18:06:01.382883 | orchestrator | Friday 04 July 2025 18:04:48 +0000 (0:00:00.882) 0:00:01.821 *********** 2025-07-04 18:06:01.382900 | orchestrator | ok: [testbed-manager] 2025-07-04 18:06:01.382920 | orchestrator | 2025-07-04 18:06:01.382939 | orchestrator | TASK [osism.services.homer : Create required directories] ********************** 2025-07-04 18:06:01.382957 | orchestrator | Friday 04 July 2025 18:04:50 +0000 (0:00:01.869) 0:00:03.691 *********** 2025-07-04 18:06:01.382977 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration) 2025-07-04 18:06:01.382990 | orchestrator | ok: [testbed-manager] => (item=/opt/homer) 2025-07-04 18:06:01.383001 | orchestrator | 2025-07-04 18:06:01.383015 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] *************** 2025-07-04 18:06:01.383076 | orchestrator | Friday 04 July 2025 18:04:51 +0000 (0:00:00.980) 0:00:04.671 *********** 2025-07-04 18:06:01.383094 | orchestrator | changed: [testbed-manager] 2025-07-04 18:06:01.383110 | orchestrator | 2025-07-04 18:06:01.383130 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] ********************* 2025-07-04 18:06:01.383149 | orchestrator | Friday 04 July 2025 18:04:53 +0000 (0:00:02.312) 0:00:06.983 *********** 2025-07-04 18:06:01.383167 | orchestrator | changed: [testbed-manager] 2025-07-04 18:06:01.383184 | orchestrator | 2025-07-04 18:06:01.383197 | orchestrator | TASK [osism.services.homer : Manage homer service] ***************************** 2025-07-04 18:06:01.383210 | orchestrator | Friday 04 July 2025 18:04:56 +0000 (0:00:02.122) 0:00:09.105 *********** 2025-07-04 18:06:01.383222 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left). 
2025-07-04 18:06:01.383236 | orchestrator | ok: [testbed-manager] 2025-07-04 18:06:01.383249 | orchestrator | 2025-07-04 18:06:01.383261 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] ***************** 2025-07-04 18:06:01.383274 | orchestrator | Friday 04 July 2025 18:05:20 +0000 (0:00:24.873) 0:00:33.978 *********** 2025-07-04 18:06:01.383287 | orchestrator | changed: [testbed-manager] 2025-07-04 18:06:01.383299 | orchestrator | 2025-07-04 18:06:01.383312 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-04 18:06:01.383325 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-04 18:06:01.383340 | orchestrator | 2025-07-04 18:06:01.383353 | orchestrator | 2025-07-04 18:06:01.383365 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-04 18:06:01.383378 | orchestrator | Friday 04 July 2025 18:05:22 +0000 (0:00:01.444) 0:00:35.423 *********** 2025-07-04 18:06:01.383390 | orchestrator | =============================================================================== 2025-07-04 18:06:01.383404 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 24.87s 2025-07-04 18:06:01.383417 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 2.31s 2025-07-04 18:06:01.383430 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 2.12s 2025-07-04 18:06:01.383443 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.87s 2025-07-04 18:06:01.383475 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 1.44s 2025-07-04 18:06:01.383488 | orchestrator | osism.services.homer : Create required directories ---------------------- 0.98s 2025-07-04 18:06:01.383501 | orchestrator | osism.services.homer : Inform 
about new parameter homer_url_opensearch_dashboards --- 0.88s 2025-07-04 18:06:01.383514 | orchestrator | 2025-07-04 18:06:01.383526 | orchestrator | 2025-07-04 18:06:01.383539 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2025-07-04 18:06:01.383551 | orchestrator | 2025-07-04 18:06:01.383562 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2025-07-04 18:06:01.383573 | orchestrator | Friday 04 July 2025 18:04:48 +0000 (0:00:01.118) 0:00:01.118 *********** 2025-07-04 18:06:01.383584 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2025-07-04 18:06:01.383596 | orchestrator | 2025-07-04 18:06:01.383607 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2025-07-04 18:06:01.383618 | orchestrator | Friday 04 July 2025 18:04:48 +0000 (0:00:00.407) 0:00:01.526 *********** 2025-07-04 18:06:01.383628 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2025-07-04 18:06:01.383639 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data) 2025-07-04 18:06:01.383650 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2025-07-04 18:06:01.383660 | orchestrator | 2025-07-04 18:06:01.383671 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2025-07-04 18:06:01.383681 | orchestrator | Friday 04 July 2025 18:04:50 +0000 (0:00:01.844) 0:00:03.370 *********** 2025-07-04 18:06:01.383692 | orchestrator | changed: [testbed-manager] 2025-07-04 18:06:01.383703 | orchestrator | 2025-07-04 18:06:01.383715 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2025-07-04 18:06:01.383726 | orchestrator | Friday 04 July 2025 18:04:52 +0000 (0:00:01.572) 
0:00:04.943 *********** 2025-07-04 18:06:01.383753 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 2025-07-04 18:06:01.383764 | orchestrator | ok: [testbed-manager] 2025-07-04 18:06:01.383775 | orchestrator | 2025-07-04 18:06:01.383786 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2025-07-04 18:06:01.383807 | orchestrator | Friday 04 July 2025 18:05:31 +0000 (0:00:39.258) 0:00:44.202 *********** 2025-07-04 18:06:01.383826 | orchestrator | changed: [testbed-manager] 2025-07-04 18:06:01.383845 | orchestrator | 2025-07-04 18:06:01.383864 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2025-07-04 18:06:01.383877 | orchestrator | Friday 04 July 2025 18:05:32 +0000 (0:00:00.790) 0:00:44.993 *********** 2025-07-04 18:06:01.383888 | orchestrator | ok: [testbed-manager] 2025-07-04 18:06:01.383899 | orchestrator | 2025-07-04 18:06:01.383909 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2025-07-04 18:06:01.383920 | orchestrator | Friday 04 July 2025 18:05:32 +0000 (0:00:00.544) 0:00:45.538 *********** 2025-07-04 18:06:01.383931 | orchestrator | changed: [testbed-manager] 2025-07-04 18:06:01.383948 | orchestrator | 2025-07-04 18:06:01.383966 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2025-07-04 18:06:01.383984 | orchestrator | Friday 04 July 2025 18:05:34 +0000 (0:00:01.578) 0:00:47.116 *********** 2025-07-04 18:06:01.384001 | orchestrator | changed: [testbed-manager] 2025-07-04 18:06:01.384018 | orchestrator | 2025-07-04 18:06:01.384065 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2025-07-04 18:06:01.384083 | orchestrator | Friday 04 July 2025 18:05:35 +0000 (0:00:00.757) 0:00:47.873 *********** 2025-07-04 18:06:01.384102 | orchestrator | changed: 
[testbed-manager] 2025-07-04 18:06:01.384122 | orchestrator | 2025-07-04 18:06:01.384141 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2025-07-04 18:06:01.384163 | orchestrator | Friday 04 July 2025 18:05:35 +0000 (0:00:00.535) 0:00:48.409 *********** 2025-07-04 18:06:01.384174 | orchestrator | ok: [testbed-manager] 2025-07-04 18:06:01.384185 | orchestrator | 2025-07-04 18:06:01.384195 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-04 18:06:01.384206 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-04 18:06:01.384217 | orchestrator | 2025-07-04 18:06:01.384228 | orchestrator | 2025-07-04 18:06:01.384239 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-04 18:06:01.384249 | orchestrator | Friday 04 July 2025 18:05:35 +0000 (0:00:00.342) 0:00:48.751 *********** 2025-07-04 18:06:01.384260 | orchestrator | =============================================================================== 2025-07-04 18:06:01.384271 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 39.26s 2025-07-04 18:06:01.384281 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.84s 2025-07-04 18:06:01.384292 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 1.58s 2025-07-04 18:06:01.384302 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.57s 2025-07-04 18:06:01.384313 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 0.79s 2025-07-04 18:06:01.384329 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 0.76s 2025-07-04 18:06:01.384348 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.54s 
2025-07-04 18:06:01.384364 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.54s 2025-07-04 18:06:01.384380 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.41s 2025-07-04 18:06:01.384397 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.34s 2025-07-04 18:06:01.384414 | orchestrator | 2025-07-04 18:06:01.384430 | orchestrator | 2025-07-04 18:06:01.384447 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-04 18:06:01.384465 | orchestrator | 2025-07-04 18:06:01.384482 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-04 18:06:01.384500 | orchestrator | Friday 04 July 2025 18:04:47 +0000 (0:00:00.755) 0:00:00.755 *********** 2025-07-04 18:06:01.384518 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2025-07-04 18:06:01.384536 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2025-07-04 18:06:01.384554 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2025-07-04 18:06:01.384572 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2025-07-04 18:06:01.384590 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2025-07-04 18:06:01.384609 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2025-07-04 18:06:01.384626 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2025-07-04 18:06:01.384644 | orchestrator | 2025-07-04 18:06:01.384681 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2025-07-04 18:06:01.384700 | orchestrator | 2025-07-04 18:06:01.384720 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2025-07-04 18:06:01.384739 | orchestrator | Friday 04 July 2025 18:04:49 +0000 
(0:00:02.856) 0:00:03.612 *********** 2025-07-04 18:06:01.384773 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-04 18:06:01.384794 | orchestrator | 2025-07-04 18:06:01.384805 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2025-07-04 18:06:01.384816 | orchestrator | Friday 04 July 2025 18:04:51 +0000 (0:00:02.026) 0:00:05.638 *********** 2025-07-04 18:06:01.384826 | orchestrator | ok: [testbed-manager] 2025-07-04 18:06:01.384837 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:06:01.384859 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:06:01.384870 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:06:01.384881 | orchestrator | ok: [testbed-node-3] 2025-07-04 18:06:01.384904 | orchestrator | ok: [testbed-node-4] 2025-07-04 18:06:01.384916 | orchestrator | ok: [testbed-node-5] 2025-07-04 18:06:01.384926 | orchestrator | 2025-07-04 18:06:01.384997 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2025-07-04 18:06:01.385010 | orchestrator | Friday 04 July 2025 18:04:54 +0000 (0:00:02.319) 0:00:07.958 *********** 2025-07-04 18:06:01.385021 | orchestrator | ok: [testbed-manager] 2025-07-04 18:06:01.385207 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:06:01.385227 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:06:01.385237 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:06:01.385248 | orchestrator | ok: [testbed-node-3] 2025-07-04 18:06:01.385302 | orchestrator | ok: [testbed-node-5] 2025-07-04 18:06:01.385314 | orchestrator | ok: [testbed-node-4] 2025-07-04 18:06:01.385325 | orchestrator | 2025-07-04 18:06:01.385337 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] ************************* 2025-07-04 18:06:01.385348 
| orchestrator | Friday 04 July 2025 18:04:58 +0000 (0:00:04.360) 0:00:12.319 *********** 2025-07-04 18:06:01.385358 | orchestrator | changed: [testbed-manager] 2025-07-04 18:06:01.385370 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:06:01.385381 | orchestrator | changed: [testbed-node-1] 2025-07-04 18:06:01.385391 | orchestrator | changed: [testbed-node-2] 2025-07-04 18:06:01.385402 | orchestrator | changed: [testbed-node-3] 2025-07-04 18:06:01.385413 | orchestrator | changed: [testbed-node-4] 2025-07-04 18:06:01.385424 | orchestrator | changed: [testbed-node-5] 2025-07-04 18:06:01.385434 | orchestrator | 2025-07-04 18:06:01.385445 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2025-07-04 18:06:01.385456 | orchestrator | Friday 04 July 2025 18:05:01 +0000 (0:00:03.365) 0:00:15.685 *********** 2025-07-04 18:06:01.385467 | orchestrator | changed: [testbed-manager] 2025-07-04 18:06:01.385477 | orchestrator | changed: [testbed-node-1] 2025-07-04 18:06:01.385488 | orchestrator | changed: [testbed-node-2] 2025-07-04 18:06:01.385499 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:06:01.385509 | orchestrator | changed: [testbed-node-3] 2025-07-04 18:06:01.385520 | orchestrator | changed: [testbed-node-5] 2025-07-04 18:06:01.385530 | orchestrator | changed: [testbed-node-4] 2025-07-04 18:06:01.385541 | orchestrator | 2025-07-04 18:06:01.385552 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 2025-07-04 18:06:01.385563 | orchestrator | Friday 04 July 2025 18:05:12 +0000 (0:00:10.564) 0:00:26.250 *********** 2025-07-04 18:06:01.385573 | orchestrator | changed: [testbed-node-3] 2025-07-04 18:06:01.385584 | orchestrator | changed: [testbed-node-1] 2025-07-04 18:06:01.385595 | orchestrator | changed: [testbed-node-2] 2025-07-04 18:06:01.385605 | orchestrator | changed: [testbed-manager] 2025-07-04 18:06:01.385616 | orchestrator | changed: [testbed-node-0] 
2025-07-04 18:06:01.385627 | orchestrator | changed: [testbed-node-4] 2025-07-04 18:06:01.385637 | orchestrator | changed: [testbed-node-5] 2025-07-04 18:06:01.385648 | orchestrator | 2025-07-04 18:06:01.385658 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2025-07-04 18:06:01.385669 | orchestrator | Friday 04 July 2025 18:05:38 +0000 (0:00:25.717) 0:00:51.968 *********** 2025-07-04 18:06:01.385681 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-04 18:06:01.385692 | orchestrator | 2025-07-04 18:06:01.385700 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2025-07-04 18:06:01.385708 | orchestrator | Friday 04 July 2025 18:05:39 +0000 (0:00:01.425) 0:00:53.393 *********** 2025-07-04 18:06:01.385715 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2025-07-04 18:06:01.385724 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2025-07-04 18:06:01.385732 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2025-07-04 18:06:01.385749 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2025-07-04 18:06:01.385757 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2025-07-04 18:06:01.385765 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2025-07-04 18:06:01.385772 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2025-07-04 18:06:01.385796 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2025-07-04 18:06:01.385813 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 2025-07-04 18:06:01.385821 | orchestrator | changed: [testbed-node-2] => (item=stream.conf) 2025-07-04 18:06:01.385828 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 
2025-07-04 18:06:01.385836 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2025-07-04 18:06:01.385844 | orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2025-07-04 18:06:01.385852 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2025-07-04 18:06:01.385860 | orchestrator | 2025-07-04 18:06:01.385868 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2025-07-04 18:06:01.385877 | orchestrator | Friday 04 July 2025 18:05:45 +0000 (0:00:05.823) 0:00:59.216 *********** 2025-07-04 18:06:01.385885 | orchestrator | ok: [testbed-manager] 2025-07-04 18:06:01.385893 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:06:01.385901 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:06:01.385909 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:06:01.385917 | orchestrator | ok: [testbed-node-3] 2025-07-04 18:06:01.385925 | orchestrator | ok: [testbed-node-4] 2025-07-04 18:06:01.385932 | orchestrator | ok: [testbed-node-5] 2025-07-04 18:06:01.385940 | orchestrator | 2025-07-04 18:06:01.385948 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2025-07-04 18:06:01.385956 | orchestrator | Friday 04 July 2025 18:05:47 +0000 (0:00:01.572) 0:01:00.789 *********** 2025-07-04 18:06:01.385964 | orchestrator | changed: [testbed-manager] 2025-07-04 18:06:01.385972 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:06:01.385980 | orchestrator | changed: [testbed-node-1] 2025-07-04 18:06:01.385988 | orchestrator | changed: [testbed-node-2] 2025-07-04 18:06:01.385996 | orchestrator | changed: [testbed-node-3] 2025-07-04 18:06:01.386004 | orchestrator | changed: [testbed-node-4] 2025-07-04 18:06:01.386012 | orchestrator | changed: [testbed-node-5] 2025-07-04 18:06:01.386139 | orchestrator | 2025-07-04 18:06:01.386147 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] *************** 2025-07-04 
18:06:01.386169 | orchestrator | Friday 04 July 2025 18:05:48 +0000 (0:00:01.906) 0:01:02.695 *********** 2025-07-04 18:06:01.386177 | orchestrator | ok: [testbed-manager] 2025-07-04 18:06:01.386185 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:06:01.386192 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:06:01.386200 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:06:01.386208 | orchestrator | ok: [testbed-node-3] 2025-07-04 18:06:01.386216 | orchestrator | ok: [testbed-node-4] 2025-07-04 18:06:01.386228 | orchestrator | ok: [testbed-node-5] 2025-07-04 18:06:01.386236 | orchestrator | 2025-07-04 18:06:01.386244 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2025-07-04 18:06:01.386252 | orchestrator | Friday 04 July 2025 18:05:50 +0000 (0:00:01.798) 0:01:04.493 *********** 2025-07-04 18:06:01.386260 | orchestrator | ok: [testbed-manager] 2025-07-04 18:06:01.386268 | orchestrator | ok: [testbed-node-3] 2025-07-04 18:06:01.386276 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:06:01.386283 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:06:01.386291 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:06:01.386299 | orchestrator | ok: [testbed-node-4] 2025-07-04 18:06:01.386306 | orchestrator | ok: [testbed-node-5] 2025-07-04 18:06:01.386314 | orchestrator | 2025-07-04 18:06:01.386322 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2025-07-04 18:06:01.386330 | orchestrator | Friday 04 July 2025 18:05:52 +0000 (0:00:01.732) 0:01:06.226 *********** 2025-07-04 18:06:01.386338 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2025-07-04 18:06:01.386354 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, 
testbed-node-5 2025-07-04 18:06:01.386363 | orchestrator | 2025-07-04 18:06:01.386371 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2025-07-04 18:06:01.386379 | orchestrator | Friday 04 July 2025 18:05:53 +0000 (0:00:01.128) 0:01:07.354 *********** 2025-07-04 18:06:01.386387 | orchestrator | changed: [testbed-manager] 2025-07-04 18:06:01.386395 | orchestrator | 2025-07-04 18:06:01.386402 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2025-07-04 18:06:01.386410 | orchestrator | Friday 04 July 2025 18:05:55 +0000 (0:00:01.674) 0:01:09.029 *********** 2025-07-04 18:06:01.386418 | orchestrator | changed: [testbed-manager] 2025-07-04 18:06:01.386426 | orchestrator | changed: [testbed-node-2] 2025-07-04 18:06:01.386434 | orchestrator | changed: [testbed-node-1] 2025-07-04 18:06:01.386442 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:06:01.386449 | orchestrator | changed: [testbed-node-5] 2025-07-04 18:06:01.386457 | orchestrator | changed: [testbed-node-4] 2025-07-04 18:06:01.386465 | orchestrator | changed: [testbed-node-3] 2025-07-04 18:06:01.386473 | orchestrator | 2025-07-04 18:06:01.386480 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-04 18:06:01.386489 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-04 18:06:01.386497 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-04 18:06:01.386505 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-04 18:06:01.386513 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-04 18:06:01.386521 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-04 
18:06:01.386529 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-04 18:06:01.386537 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-04 18:06:01.386544 | orchestrator | 2025-07-04 18:06:01.386552 | orchestrator | 2025-07-04 18:06:01.386560 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-04 18:06:01.386568 | orchestrator | Friday 04 July 2025 18:05:58 +0000 (0:00:03.367) 0:01:12.397 *********** 2025-07-04 18:06:01.386576 | orchestrator | =============================================================================== 2025-07-04 18:06:01.386584 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 25.72s 2025-07-04 18:06:01.386591 | orchestrator | osism.services.netdata : Add repository -------------------------------- 10.56s 2025-07-04 18:06:01.386599 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 5.82s 2025-07-04 18:06:01.386607 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 4.36s 2025-07-04 18:06:01.386615 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 3.37s 2025-07-04 18:06:01.386623 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 3.37s 2025-07-04 18:06:01.386630 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.86s 2025-07-04 18:06:01.386638 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.32s 2025-07-04 18:06:01.386646 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 2.03s 2025-07-04 18:06:01.386658 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.91s 2025-07-04 18:06:01.386666 | orchestrator | 
osism.services.netdata : Add netdata user to docker group --------------- 1.80s 2025-07-04 18:06:01.386680 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 1.73s 2025-07-04 18:06:01.386688 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 1.67s 2025-07-04 18:06:01.386696 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.57s 2025-07-04 18:06:01.386707 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.43s 2025-07-04 18:06:01.386716 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.13s 2025-07-04 18:06:01.386724 | orchestrator | 2025-07-04 18:06:01 | INFO  | Task 8772adf5-2caf-44e0-b513-44578a2ff97c is in state SUCCESS 2025-07-04 18:06:01.386732 | orchestrator | 2025-07-04 18:06:01 | INFO  | Task 847f3d47-d817-46da-b0dd-ce5c950699c9 is in state STARTED 2025-07-04 18:06:01.386740 | orchestrator | 2025-07-04 18:06:01 | INFO  | Task 1fbe3cf7-b39a-439f-bfc5-7ee910003545 is in state STARTED 2025-07-04 18:06:01.388841 | orchestrator | 2025-07-04 18:06:01 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED 2025-07-04 18:06:01.388910 | orchestrator | 2025-07-04 18:06:01 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:06:04.422813 | orchestrator | 2025-07-04 18:06:04 | INFO  | Task 9808d906-08c4-466f-8fbf-19c9a61ce4d6 is in state STARTED 2025-07-04 18:06:04.424275 | orchestrator | 2025-07-04 18:06:04 | INFO  | Task 847f3d47-d817-46da-b0dd-ce5c950699c9 is in state STARTED 2025-07-04 18:06:04.427080 | orchestrator | 2025-07-04 18:06:04 | INFO  | Task 1fbe3cf7-b39a-439f-bfc5-7ee910003545 is in state STARTED 2025-07-04 18:06:04.428020 | orchestrator | 2025-07-04 18:06:04 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED 2025-07-04 18:06:04.428287 | orchestrator | 2025-07-04 18:06:04 | INFO  | Wait 1 
second(s) until the next check 2025-07-04 18:06:07.474527 | orchestrator | 2025-07-04 18:06:07 | INFO  | Task 9808d906-08c4-466f-8fbf-19c9a61ce4d6 is in state STARTED 2025-07-04 18:06:07.474614 | orchestrator | 2025-07-04 18:06:07 | INFO  | Task 847f3d47-d817-46da-b0dd-ce5c950699c9 is in state STARTED 2025-07-04 18:06:07.479339 | orchestrator | 2025-07-04 18:06:07 | INFO  | Task 1fbe3cf7-b39a-439f-bfc5-7ee910003545 is in state STARTED 2025-07-04 18:06:07.482888 | orchestrator | 2025-07-04 18:06:07 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED 2025-07-04 18:06:07.482941 | orchestrator | 2025-07-04 18:06:07 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:06:10.542783 | orchestrator | 2025-07-04 18:06:10 | INFO  | Task 9808d906-08c4-466f-8fbf-19c9a61ce4d6 is in state STARTED 2025-07-04 18:06:10.544630 | orchestrator | 2025-07-04 18:06:10 | INFO  | Task 847f3d47-d817-46da-b0dd-ce5c950699c9 is in state STARTED 2025-07-04 18:06:10.546265 | orchestrator | 2025-07-04 18:06:10 | INFO  | Task 1fbe3cf7-b39a-439f-bfc5-7ee910003545 is in state STARTED 2025-07-04 18:06:10.547771 | orchestrator | 2025-07-04 18:06:10 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED 2025-07-04 18:06:10.547849 | orchestrator | 2025-07-04 18:06:10 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:06:13.600730 | orchestrator | 2025-07-04 18:06:13 | INFO  | Task 9808d906-08c4-466f-8fbf-19c9a61ce4d6 is in state STARTED 2025-07-04 18:06:13.602547 | orchestrator | 2025-07-04 18:06:13 | INFO  | Task 847f3d47-d817-46da-b0dd-ce5c950699c9 is in state STARTED 2025-07-04 18:06:13.603572 | orchestrator | 2025-07-04 18:06:13 | INFO  | Task 1fbe3cf7-b39a-439f-bfc5-7ee910003545 is in state STARTED 2025-07-04 18:06:13.604827 | orchestrator | 2025-07-04 18:06:13 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED 2025-07-04 18:06:13.604856 | orchestrator | 2025-07-04 18:06:13 | INFO  | Wait 1 second(s) until the next check 
2025-07-04 18:06:16.663760 | orchestrator | 2025-07-04 18:06:16 | INFO  | Task 9808d906-08c4-466f-8fbf-19c9a61ce4d6 is in state STARTED 2025-07-04 18:06:16.664594 | orchestrator | 2025-07-04 18:06:16 | INFO  | Task 847f3d47-d817-46da-b0dd-ce5c950699c9 is in state STARTED 2025-07-04 18:06:16.668953 | orchestrator | 2025-07-04 18:06:16 | INFO  | Task 1fbe3cf7-b39a-439f-bfc5-7ee910003545 is in state STARTED 2025-07-04 18:06:16.672156 | orchestrator | 2025-07-04 18:06:16 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED 2025-07-04 18:06:16.672225 | orchestrator | 2025-07-04 18:06:16 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:06:19.716913 | orchestrator | 2025-07-04 18:06:19 | INFO  | Task 9808d906-08c4-466f-8fbf-19c9a61ce4d6 is in state STARTED 2025-07-04 18:06:19.717369 | orchestrator | 2025-07-04 18:06:19 | INFO  | Task 847f3d47-d817-46da-b0dd-ce5c950699c9 is in state STARTED 2025-07-04 18:06:19.718762 | orchestrator | 2025-07-04 18:06:19 | INFO  | Task 1fbe3cf7-b39a-439f-bfc5-7ee910003545 is in state STARTED 2025-07-04 18:06:19.722506 | orchestrator | 2025-07-04 18:06:19 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED 2025-07-04 18:06:19.722617 | orchestrator | 2025-07-04 18:06:19 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:06:22.786305 | orchestrator | 2025-07-04 18:06:22 | INFO  | Task 9808d906-08c4-466f-8fbf-19c9a61ce4d6 is in state STARTED 2025-07-04 18:06:22.790634 | orchestrator | 2025-07-04 18:06:22 | INFO  | Task 847f3d47-d817-46da-b0dd-ce5c950699c9 is in state STARTED 2025-07-04 18:06:22.791815 | orchestrator | 2025-07-04 18:06:22 | INFO  | Task 1fbe3cf7-b39a-439f-bfc5-7ee910003545 is in state STARTED 2025-07-04 18:06:22.793864 | orchestrator | 2025-07-04 18:06:22 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED 2025-07-04 18:06:22.793942 | orchestrator | 2025-07-04 18:06:22 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:06:25.844403 | 
orchestrator | 2025-07-04 18:06:25 | INFO  | Task 9808d906-08c4-466f-8fbf-19c9a61ce4d6 is in state SUCCESS 2025-07-04 18:06:25.848268 | orchestrator | 2025-07-04 18:06:25 | INFO  | Task 847f3d47-d817-46da-b0dd-ce5c950699c9 is in state STARTED 2025-07-04 18:06:25.852000 | orchestrator | 2025-07-04 18:06:25 | INFO  | Task 1fbe3cf7-b39a-439f-bfc5-7ee910003545 is in state STARTED 2025-07-04 18:06:25.853596 | orchestrator | 2025-07-04 18:06:25 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED 2025-07-04 18:06:25.853826 | orchestrator | 2025-07-04 18:06:25 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:06:28.905511 | orchestrator | 2025-07-04 18:06:28 | INFO  | Task 847f3d47-d817-46da-b0dd-ce5c950699c9 is in state STARTED 2025-07-04 18:06:28.907514 | orchestrator | 2025-07-04 18:06:28 | INFO  | Task 1fbe3cf7-b39a-439f-bfc5-7ee910003545 is in state STARTED 2025-07-04 18:06:28.908624 | orchestrator | 2025-07-04 18:06:28 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED 2025-07-04 18:06:28.909037 | orchestrator | 2025-07-04 18:06:28 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:06:31.960438 | orchestrator | 2025-07-04 18:06:31 | INFO  | Task 847f3d47-d817-46da-b0dd-ce5c950699c9 is in state STARTED 2025-07-04 18:06:31.963103 | orchestrator | 2025-07-04 18:06:31 | INFO  | Task 1fbe3cf7-b39a-439f-bfc5-7ee910003545 is in state STARTED 2025-07-04 18:06:31.965708 | orchestrator | 2025-07-04 18:06:31 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED 2025-07-04 18:06:31.965837 | orchestrator | 2025-07-04 18:06:31 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:06:35.015249 | orchestrator | 2025-07-04 18:06:35 | INFO  | Task 847f3d47-d817-46da-b0dd-ce5c950699c9 is in state STARTED 2025-07-04 18:06:35.017677 | orchestrator | 2025-07-04 18:06:35 | INFO  | Task 1fbe3cf7-b39a-439f-bfc5-7ee910003545 is in state STARTED 2025-07-04 18:06:35.018425 | orchestrator | 2025-07-04 
18:06:35 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED 2025-07-04 18:06:35.018454 | orchestrator | 2025-07-04 18:06:35 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:06:38.049735 | orchestrator | 2025-07-04 18:06:38 | INFO  | Task 847f3d47-d817-46da-b0dd-ce5c950699c9 is in state STARTED 2025-07-04 18:06:38.051187 | orchestrator | 2025-07-04 18:06:38 | INFO  | Task 1fbe3cf7-b39a-439f-bfc5-7ee910003545 is in state STARTED 2025-07-04 18:06:38.053232 | orchestrator | 2025-07-04 18:06:38 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED 2025-07-04 18:06:38.053282 | orchestrator | 2025-07-04 18:06:38 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:06:41.096830 | orchestrator | 2025-07-04 18:06:41 | INFO  | Task 847f3d47-d817-46da-b0dd-ce5c950699c9 is in state STARTED 2025-07-04 18:06:41.098765 | orchestrator | 2025-07-04 18:06:41 | INFO  | Task 1fbe3cf7-b39a-439f-bfc5-7ee910003545 is in state STARTED 2025-07-04 18:06:41.101837 | orchestrator | 2025-07-04 18:06:41 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED 2025-07-04 18:06:41.101908 | orchestrator | 2025-07-04 18:06:41 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:06:44.137436 | orchestrator | 2025-07-04 18:06:44 | INFO  | Task 847f3d47-d817-46da-b0dd-ce5c950699c9 is in state STARTED 2025-07-04 18:06:44.138434 | orchestrator | 2025-07-04 18:06:44 | INFO  | Task 1fbe3cf7-b39a-439f-bfc5-7ee910003545 is in state STARTED 2025-07-04 18:06:44.139905 | orchestrator | 2025-07-04 18:06:44 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED 2025-07-04 18:06:44.139923 | orchestrator | 2025-07-04 18:06:44 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:06:47.179012 | orchestrator | 2025-07-04 18:06:47 | INFO  | Task 847f3d47-d817-46da-b0dd-ce5c950699c9 is in state STARTED 2025-07-04 18:06:47.179216 | orchestrator | 2025-07-04 18:06:47 | INFO  | Task 
1fbe3cf7-b39a-439f-bfc5-7ee910003545 is in state STARTED
2025-07-04 18:06:47.180177 | orchestrator | 2025-07-04 18:06:47 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED
2025-07-04 18:06:47.180205 | orchestrator | 2025-07-04 18:06:47 | INFO  | Wait 1 second(s) until the next check
[... identical polling of tasks 847f3d47-d817-46da-b0dd-ce5c950699c9, 1fbe3cf7-b39a-439f-bfc5-7ee910003545 and 02ae7d3b-ce19-41e0-b152-00c2d119a997 repeated every ~3 seconds from 18:06:50 to 18:07:57, all in state STARTED ...]
2025-07-04 18:08:00.575544 | orchestrator | 2025-07-04 18:08:00 | INFO  | Task b3b1dbd7-3e23-4d6d-9f64-a8a7811fc683 is in state STARTED
2025-07-04 18:08:00.576269 | orchestrator | 2025-07-04 18:08:00 | INFO  | Task b1d5955a-189a-43a1-a4f7-3ba9c16fa60d is in state STARTED
2025-07-04 18:08:00.578120 | orchestrator | 2025-07-04 18:08:00 | INFO  | Task a21bc798-738f-4a98-9b83-5e93eed95645 is in state STARTED
2025-07-04 18:08:00.579739 | orchestrator | 2025-07-04 18:08:00 | INFO  | Task 847f3d47-d817-46da-b0dd-ce5c950699c9 is in state STARTED
2025-07-04 18:08:00.588069 | orchestrator | 2025-07-04 18:08:00 | INFO  | Task 1fbe3cf7-b39a-439f-bfc5-7ee910003545 is in state SUCCESS
2025-07-04 18:08:00.591824 | orchestrator |
2025-07-04 18:08:00.591909 | orchestrator |
2025-07-04 18:08:00.591963 | orchestrator | PLAY [Apply role phpmyadmin] ***************************************************
2025-07-04 18:08:00.591984 | orchestrator |
2025-07-04 18:08:00.592001 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] *************
2025-07-04 18:08:00.592019 | orchestrator | Friday 04 July 2025 18:05:13 +0000 (0:00:00.941) 0:00:00.941 ***********
2025-07-04 18:08:00.592035 | orchestrator | ok: [testbed-manager]
2025-07-04 18:08:00.592053 | orchestrator |
2025-07-04 18:08:00.592069 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] *****************
2025-07-04 18:08:00.592085 | orchestrator | Friday 04 July 2025 18:05:14 +0000 (0:00:01.522) 0:00:02.463 ***********
2025-07-04 18:08:00.592102 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin)
2025-07-04 18:08:00.592118 | orchestrator |
2025-07-04
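The osism wait output above is a client polling Celery task states until each task reaches a terminal state, sleeping between checks. A minimal sketch of that polling pattern (function and parameter names are illustrative, not the actual osism client code):

```python
import time

def wait_for_tasks(task_ids, get_state, interval=1.0, timeout=3600):
    """Poll task states until all are terminal; returns True if all finished.

    get_state(task_id) is an illustrative callback standing in for a Celery
    AsyncResult state lookup; it returns e.g. STARTED, SUCCESS or FAILURE.
    """
    deadline = time.monotonic() + timeout
    pending = set(task_ids)
    while pending and time.monotonic() < deadline:
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
    return not pending
```

Note that a fixed short interval, as seen in the log, keeps feedback prompt at the cost of repetitive output; a production client might add backoff or suppress unchanged states.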
18:08:00.592134 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] ****************
2025-07-04 18:08:00.592150 | orchestrator | Friday 04 July 2025 18:05:15 +0000 (0:00:00.632) 0:00:03.096 ***********
2025-07-04 18:08:00.592166 | orchestrator | changed: [testbed-manager]
2025-07-04 18:08:00.592182 | orchestrator |
2025-07-04 18:08:00.592202 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] *******************
2025-07-04 18:08:00.592222 | orchestrator | Friday 04 July 2025 18:05:16 +0000 (0:00:01.402) 0:00:04.499 ***********
2025-07-04 18:08:00.592259 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).
2025-07-04 18:08:00.592287 | orchestrator | ok: [testbed-manager]
2025-07-04 18:08:00.592312 | orchestrator |
2025-07-04 18:08:00.592335 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
2025-07-04 18:08:00.592354 | orchestrator | Friday 04 July 2025 18:06:21 +0000 (0:01:04.381) 0:01:08.880 ***********
2025-07-04 18:08:00.592370 | orchestrator | changed: [testbed-manager]
2025-07-04 18:08:00.592387 | orchestrator |
2025-07-04 18:08:00.592404 | orchestrator | PLAY RECAP *********************************************************************
2025-07-04 18:08:00.592421 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-04 18:08:00.592441 | orchestrator |
2025-07-04 18:08:00.592459 | orchestrator |
2025-07-04 18:08:00.592476 | orchestrator | TASKS RECAP ********************************************************************
2025-07-04 18:08:00.592493 | orchestrator | Friday 04 July 2025 18:06:24 +0000 (0:00:03.526) 0:01:12.406 ***********
2025-07-04 18:08:00.592505 | orchestrator | ===============================================================================
2025-07-04 18:08:00.592517 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 64.38s
2025-07-04 18:08:00.592529 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 3.53s
2025-07-04 18:08:00.592540 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 1.52s
2025-07-04 18:08:00.592551 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.40s
2025-07-04 18:08:00.592561 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.63s
2025-07-04 18:08:00.592570 | orchestrator |
2025-07-04 18:08:00.592580 | orchestrator |
2025-07-04 18:08:00.592589 | orchestrator | PLAY [Apply role common] *******************************************************
2025-07-04 18:08:00.592599 | orchestrator |
2025-07-04 18:08:00.592608 | orchestrator | TASK [common : include_tasks] **************************************************
2025-07-04 18:08:00.592637 | orchestrator | Friday 04 July 2025 18:04:38 +0000 (0:00:00.367) 0:00:00.367 ***********
2025-07-04 18:08:00.592648 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-04 18:08:00.592659 | orchestrator |
2025-07-04 18:08:00.592669 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2025-07-04 18:08:00.592678 | orchestrator | Friday 04 July 2025 18:04:40 +0000 (0:00:01.587) 0:00:01.954 ***********
2025-07-04 18:08:00.592688 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2025-07-04 18:08:00.592697 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2025-07-04 18:08:00.592707 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2025-07-04 18:08:00.592716 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2025-07-04
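The phpmyadmin play above creates an external traefik network and then deploys the service from a docker-compose.yml templated into /opt/phpmyadmin. An illustrative compose file for that layout (a sketch under assumptions, not the role's actual template):

```yaml
# Illustrative sketch only; the osism.services.phpmyadmin role templates
# its own docker-compose.yml, and image/environment values are assumptions.
services:
  phpmyadmin:
    image: phpmyadmin:latest        # hypothetical tag; the role pins its own image
    restart: unless-stopped
    environment:
      PMA_ARBITRARY: "1"            # assumption: allow arbitrary server targets
    networks:
      - traefik

networks:
  traefik:
    external: true                  # matches the "Create traefik external network" task
```

Declaring the traefik network as `external` is why the play needs a separate task to create it first: compose then joins the existing network instead of managing it.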
18:08:00.592726 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2025-07-04 18:08:00.592735 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-07-04 18:08:00.592745 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-07-04 18:08:00.592754 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-07-04 18:08:00.592763 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-07-04 18:08:00.592773 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2025-07-04 18:08:00.592782 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-07-04 18:08:00.592792 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-07-04 18:08:00.592803 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2025-07-04 18:08:00.592812 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-07-04 18:08:00.592822 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-07-04 18:08:00.592831 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-07-04 18:08:00.592858 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-07-04 18:08:00.592868 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-07-04 18:08:00.592878 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-07-04 18:08:00.592887 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-07-04 18:08:00.592897 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-07-04 18:08:00.592907 | orchestrator |
2025-07-04 18:08:00.592917 | orchestrator | TASK [common : include_tasks] **************************************************
2025-07-04 18:08:00.592989 | orchestrator | Friday 04 July 2025 18:04:45 +0000 (0:00:05.075) 0:00:07.029 ***********
2025-07-04 18:08:00.593007 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-04 18:08:00.593025 | orchestrator |
2025-07-04 18:08:00.593042 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] *********
2025-07-04 18:08:00.593186 | orchestrator | Friday 04 July 2025 18:04:46 +0000 (0:00:01.195) 0:00:08.224 ***********
2025-07-04 18:08:00.593206 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-04 18:08:00.593330 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:08:00.593429 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
[... the same fluentd, kolla-toolbox and cron items also reported changed on testbed-node-0 through testbed-node-5; identical item dicts condensed ...]
2025-07-04 18:08:00.593548 |
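The per-item output above comes from looping over a dict of service definitions, each key/value pair carrying container_name, image and volumes. A hedged sketch of such a dict-driven task using Ansible's core dict2items filter (task and variable names are illustrative, not the actual kolla-ansible source):

```yaml
# Sketch of a dict-driven loop shaped like the log items above;
# common_services and the src path are assumptions for illustration.
- name: common | Copying over extra CA certificates
  become: true
  ansible.builtin.copy:
    src: "ca-bundle.crt"
    dest: "/etc/kolla/{{ item.key }}/ca-certificates/"
    mode: "0644"
  loop: "{{ common_services | dict2items }}"   # yields {'key': ..., 'value': {...}}
  when: item.value.enabled | bool
```

Looping over `dict2items` is why each log line prints the full `{'key': ..., 'value': ...}` structure as its item.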
orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:08:00.593558 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:08:00.593568 | orchestrator |
2025-07-04 18:08:00.593578 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] ***
2025-07-04 18:08:00.593594 | orchestrator | Friday 04 July 2025 18:04:51 +0000 (0:00:05.150) 0:00:13.375 ***********
2025-07-04 18:08:00.593604 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-04 18:08:00.593626 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:08:00.593636 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:08:00.593647 | orchestrator | skipping: [testbed-manager]
[... the same fluentd, kolla-toolbox and cron skips repeated for testbed-node-0, testbed-node-1 and testbed-node-2; identical item dicts condensed ...]
2025-07-04 18:08:00.593899 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-04 18:08:00.593917 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:08:00.593990 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-04 18:08:00.594069 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:08:00.594084 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-07-04 18:08:00.594095 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-04 18:08:00.594105 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-04 18:08:00.594115 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:08:00.594126 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-07-04 18:08:00.594143 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-04 18:08:00.594160 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-04 18:08:00.594190 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:08:00.594205 | orchestrator | 2025-07-04 18:08:00.594219 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2025-07-04 18:08:00.594234 | orchestrator | Friday 04 July 2025 18:04:52 +0000 (0:00:01.171) 0:00:14.547 
*********** 2025-07-04 18:08:00.594267 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-07-04 18:08:00.594305 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-04 18:08:00.594331 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-04 18:08:00.594349 | orchestrator | skipping: [testbed-manager] 2025-07-04 18:08:00.594367 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-07-04 18:08:00.594385 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-04 18:08:00.594398 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-04 18:08:00.594408 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:08:00.594418 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-07-04 18:08:00.594428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-04 18:08:00.594451 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-04 18:08:00.594461 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:08:00.594471 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-07-04 18:08:00.594485 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-04 18:08:00.594496 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-04 18:08:00.594506 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:08:00.594516 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-07-04 18:08:00.594526 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 
'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-04 18:08:00.594536 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-04 18:08:00.594551 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:08:00.594561 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-07-04 18:08:00.594578 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-04 18:08:00.594593 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-04 18:08:00.594603 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:08:00.594614 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-07-04 18:08:00.594624 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2025-07-04 18:08:00.594634 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-04 18:08:00.594644 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:08:00.594654 | orchestrator | 2025-07-04 18:08:00.594664 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2025-07-04 18:08:00.594673 | orchestrator | Friday 04 July 2025 18:04:55 +0000 (0:00:02.729) 0:00:17.276 *********** 2025-07-04 18:08:00.594689 | orchestrator | skipping: [testbed-manager] 2025-07-04 18:08:00.594698 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:08:00.594708 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:08:00.594718 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:08:00.594727 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:08:00.594737 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:08:00.594746 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:08:00.594756 | orchestrator | 2025-07-04 18:08:00.594765 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2025-07-04 18:08:00.594775 | orchestrator | Friday 04 July 2025 18:04:56 +0000 (0:00:00.784) 0:00:18.061 *********** 2025-07-04 18:08:00.594784 | orchestrator | skipping: [testbed-manager] 2025-07-04 18:08:00.594794 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:08:00.594803 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:08:00.594813 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:08:00.594822 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:08:00.594832 | 
orchestrator | skipping: [testbed-node-4] 2025-07-04 18:08:00.594841 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:08:00.594851 | orchestrator | 2025-07-04 18:08:00.594861 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2025-07-04 18:08:00.594870 | orchestrator | Friday 04 July 2025 18:04:57 +0000 (0:00:01.127) 0:00:19.188 *********** 2025-07-04 18:08:00.594886 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-04 18:08:00.594897 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-04 18:08:00.594912 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-04 18:08:00.594923 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-04 18:08:00.594996 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-04 18:08:00.595014 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-04 18:08:00.595024 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-04 18:08:00.595041 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-04 18:08:00.595052 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-04 18:08:00.595067 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-04 18:08:00.595090 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-04 18:08:00.595100 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-04 18:08:00.595117 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:08:00.595128 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:08:00.595138 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:08:00.595162 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:08:00.595172 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:08:00.595287 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:08:00.595296 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:08:00.595311 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:08:00.595320 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:08:00.595328 | orchestrator |
2025-07-04 18:08:00.595336 | orchestrator | TASK [common : Find custom fluentd input config files] *************************
2025-07-04 18:08:00.595344 | orchestrator | Friday 04 July 2025 18:05:03 +0000 (0:00:06.055) 0:00:25.244 ***********
2025-07-04 18:08:00.595352 | orchestrator | [WARNING]: Skipped
2025-07-04 18:08:00.595360 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due
2025-07-04 18:08:00.595369 | orchestrator | to this access issue:
2025-07-04 18:08:00.595377 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a
2025-07-04 18:08:00.595384 | orchestrator | directory
2025-07-04 18:08:00.595392 | orchestrator | ok: [testbed-manager -> localhost]
2025-07-04 18:08:00.595400 | orchestrator |
2025-07-04 18:08:00.595408 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************
2025-07-04 18:08:00.595416 | orchestrator | Friday 04 July 2025 18:05:06 +0000 (0:00:03.002) 0:00:28.246 ***********
2025-07-04 18:08:00.595424 | orchestrator | [WARNING]: Skipped
2025-07-04 18:08:00.595431 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due
2025-07-04 18:08:00.595439 | orchestrator | to this access issue:
2025-07-04 18:08:00.595447 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a
2025-07-04 18:08:00.595455 | orchestrator | directory
2025-07-04 18:08:00.595463 | orchestrator | ok: [testbed-manager -> localhost]
2025-07-04 18:08:00.595470 | orchestrator |
2025-07-04 18:08:00.595478 | orchestrator | TASK [common : Find custom fluentd format config files] ************************
2025-07-04 18:08:00.595486 | orchestrator | Friday 04 July 2025 18:05:07 +0000 (0:00:01.529) 0:00:29.776 ***********
2025-07-04 18:08:00.595494 | orchestrator | [WARNING]: Skipped
2025-07-04 18:08:00.595501 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due
2025-07-04 18:08:00.595509 | orchestrator | to this access issue:
2025-07-04 18:08:00.595517 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a
2025-07-04 18:08:00.595525 | orchestrator | directory
2025-07-04 18:08:00.595532 | orchestrator | ok: [testbed-manager -> localhost]
2025-07-04 18:08:00.595541 | orchestrator |
2025-07-04 18:08:00.595554 | orchestrator | TASK [common : Find custom fluentd output config files] ************************
2025-07-04 18:08:00.595562 | orchestrator | Friday 04 July 2025 18:05:09 +0000 (0:00:01.170) 0:00:30.946 ***********
2025-07-04 18:08:00.595570 | orchestrator | [WARNING]: Skipped
2025-07-04 18:08:00.595578 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due
2025-07-04 18:08:00.595586 | orchestrator | to this access issue:
2025-07-04 18:08:00.595594 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a
2025-07-04 18:08:00.595602 | orchestrator | directory
2025-07-04 18:08:00.595609 | orchestrator | ok: [testbed-manager -> localhost]
2025-07-04 18:08:00.595617 | orchestrator |
2025-07-04 18:08:00.595625 | orchestrator | TASK [common : Copying over fluentd.conf] **************************************
2025-07-04 18:08:00.595633 | orchestrator | Friday 04 July 2025 18:05:10 +0000 (0:00:00.944) 0:00:31.891 ***********
2025-07-04 18:08:00.595648 | orchestrator | changed: [testbed-manager]
2025-07-04 18:08:00.595656 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:08:00.595664 | orchestrator | changed: [testbed-node-1]
2025-07-04 18:08:00.595671 | orchestrator | changed: [testbed-node-2]
2025-07-04 18:08:00.595679 | orchestrator | changed: [testbed-node-3]
2025-07-04 18:08:00.595687 | orchestrator | changed: [testbed-node-4]
2025-07-04 18:08:00.595695 | orchestrator | changed: [testbed-node-5]
2025-07-04 18:08:00.595747 | orchestrator |
2025-07-04 18:08:00.595763 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************
2025-07-04 18:08:00.595771 | orchestrator | Friday 04 July 2025 18:05:15 +0000 (0:00:05.750) 0:00:37.642 ***********
2025-07-04 18:08:00.595779 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-07-04 18:08:00.595787 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-07-04 18:08:00.595795 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-07-04 18:08:00.595803 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-07-04 18:08:00.595811 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-07-04 18:08:00.595819 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-07-04 18:08:00.595827 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-07-04 18:08:00.595835 | orchestrator |
2025-07-04 18:08:00.595843 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] ***************************
2025-07-04 18:08:00.595850 | orchestrator | Friday 04 July 2025 18:05:19 +0000 (0:00:03.497) 0:00:41.139 ***********
2025-07-04 18:08:00.595858 | orchestrator | changed: [testbed-manager]
2025-07-04 18:08:00.595866 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:08:00.595874 | orchestrator | changed: [testbed-node-1]
2025-07-04 18:08:00.595882 | orchestrator | changed: [testbed-node-2]
2025-07-04 18:08:00.595890 | orchestrator | changed: [testbed-node-3]
2025-07-04 18:08:00.595897 | orchestrator | changed: [testbed-node-4]
2025-07-04 18:08:00.595905 | orchestrator | changed: [testbed-node-5]
2025-07-04 18:08:00.595913 | orchestrator |
2025-07-04 18:08:00.595920 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] ***
2025-07-04 18:08:00.595950 | orchestrator | Friday 04 July 2025 18:05:22 +0000 (0:00:03.174) 0:00:44.314 ***********
2025-07-04 18:08:00.595959 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-04 18:08:00.595968 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:08:00.595976 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-04 18:08:00.595996 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:08:00.596006 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:08:00.596025 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:08:00.596034 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-04 18:08:00.596043 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:08:00.596051 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-04 18:08:00.596060 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:08:00.596078 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-04 18:08:00.596091 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:08:00.596104 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:08:00.596112 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:08:00.596121 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:08:00.596129 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-04 18:08:00.596138 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:08:00.596151 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-04 18:08:00.596168 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:08:00.596177 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:08:00.596188 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:08:00.596197 | orchestrator |
2025-07-04 18:08:00.596205 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************
2025-07-04 18:08:00.596213 | orchestrator | Friday 04 July 2025 18:05:24 +0000 (0:00:02.263) 0:00:46.578 ***********
2025-07-04 18:08:00.596221 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-07-04 18:08:00.596229 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-07-04 18:08:00.596237 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-07-04 18:08:00.596245 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-07-04 18:08:00.596253 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-07-04 18:08:00.596260 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-07-04 18:08:00.596269 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-07-04 18:08:00.596276 | orchestrator |
2025-07-04 18:08:00.596284 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] **********************
2025-07-04 18:08:00.596316 | orchestrator | Friday 04 July 2025 18:05:27 +0000 (0:00:03.017) 0:00:49.595 ***********
2025-07-04 18:08:00.596330 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-07-04 18:08:00.596343 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-07-04 18:08:00.596356 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-07-04 18:08:00.596369 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-07-04 18:08:00.596382 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-07-04 18:08:00.596407 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-07-04 18:08:00.596421 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-07-04 18:08:00.596435 | orchestrator |
2025-07-04 18:08:00.596448 | orchestrator | TASK [common : Check common containers] ****************************************
2025-07-04 18:08:00.596462 | orchestrator | Friday 04 July 2025 18:05:30 +0000 (0:00:02.662) 0:00:52.258 ***********
2025-07-04 18:08:00.596471 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-04 18:08:00.596480 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-04 18:08:00.596496 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-04 18:08:00.596510 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-04 18:08:00.596519 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-04 18:08:00.596528 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:08:00.596536 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:08:00.596551 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:08:00.596559 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-04 18:08:00.596573 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:08:00.596585 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-04 18:08:00.596594 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:08:00.596603 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:08:00.596616 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:08:00.596625 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:08:00.596633 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:08:00.596647 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:08:00.596655 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:08:00.596668 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:08:00.596677 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:08:00.596685 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:08:00.596699 | orchestrator |
2025-07-04 18:08:00.596707 | orchestrator | TASK [common : Creating log volume] ********************************************
2025-07-04 18:08:00.596715 | orchestrator | Friday 04 July 2025 18:05:33 +0000 (0:00:02.707) 0:00:54.966 ***********
2025-07-04 18:08:00.596723 | orchestrator | changed: [testbed-manager]
2025-07-04 18:08:00.596731 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:08:00.596739 | orchestrator | changed: [testbed-node-1]
2025-07-04 18:08:00.596747 | orchestrator | changed: [testbed-node-2]
2025-07-04 18:08:00.596754 | orchestrator | changed: [testbed-node-3]
2025-07-04 18:08:00.596762 | orchestrator | changed: [testbed-node-4]
2025-07-04 18:08:00.596770 | orchestrator | changed: [testbed-node-5]
2025-07-04 18:08:00.596778 | orchestrator |
2025-07-04 18:08:00.596786 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] ***********************
2025-07-04 18:08:00.596794 | orchestrator | Friday 04 July 2025 18:05:34 +0000 (0:00:01.399) 0:00:56.365 ***********
2025-07-04 18:08:00.596802 | orchestrator | changed: [testbed-manager]
2025-07-04 18:08:00.596809 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:08:00.596817 | orchestrator | changed: [testbed-node-1]
2025-07-04 18:08:00.596825 | orchestrator | changed: [testbed-node-2]
2025-07-04 18:08:00.596833 | orchestrator | changed: [testbed-node-3]
2025-07-04 18:08:00.596841 | orchestrator | changed: [testbed-node-4]
2025-07-04 18:08:00.596849 | orchestrator | changed: [testbed-node-5]
2025-07-04 18:08:00.596857 | orchestrator |
2025-07-04 18:08:00.596864 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-07-04 18:08:00.596872 | orchestrator | Friday 04 July 2025 18:05:35 +0000 (0:00:00.207) 0:00:57.471 ***********
2025-07-04 18:08:00.596880 | orchestrator |
2025-07-04 18:08:00.596888 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-07-04 18:08:00.596896 | orchestrator | Friday 04 July 2025 18:05:35 +0000 (0:00:00.072) 0:00:57.679 ***********
2025-07-04 18:08:00.596904 | orchestrator |
2025-07-04 18:08:00.596912 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-07-04 18:08:00.596920 | orchestrator | Friday 04 July 2025 18:05:35 +0000 (0:00:00.067) 0:00:57.751 ***********
2025-07-04 18:08:00.596954 | orchestrator |
2025-07-04 18:08:00.596962 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-07-04 18:08:00.596970 | orchestrator | Friday 04 July 2025 18:05:35 +0000 (0:00:00.068) 0:00:57.819 ***********
2025-07-04 18:08:00.596978 | orchestrator |
2025-07-04 18:08:00.596985 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-07-04 18:08:00.596993 | orchestrator | Friday 04 July 2025 18:05:36 +0000 (0:00:00.068) 0:00:57.887 ***********
2025-07-04 18:08:00.597001 | orchestrator |
2025-07-04 18:08:00.597008 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-07-04 18:08:00.597016 | orchestrator | Friday 04 July 2025 18:05:36 +0000 (0:00:00.093) 0:00:57.980 ***********
2025-07-04 18:08:00.597120 | orchestrator |
2025-07-04 18:08:00.597131 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-07-04 18:08:00.597139 | orchestrator | Friday 04 July 2025 18:05:36 +0000 (0:00:00.064) 0:00:58.045 ***********
2025-07-04 18:08:00.597146 | orchestrator |
2025-07-04 18:08:00.597154 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2025-07-04 18:08:00.597162 | orchestrator | Friday 04 July 2025 18:05:36 +0000 (0:00:00.110) 0:00:58.155 ***********
2025-07-04 18:08:00.597187 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:08:00.597207 | orchestrator | changed: [testbed-node-1]
2025-07-04 18:08:00.597220 | orchestrator | changed: [testbed-manager]
2025-07-04 18:08:00.597233 | orchestrator | changed: [testbed-node-3]
2025-07-04 18:08:00.597245 | orchestrator | changed: [testbed-node-5]
2025-07-04 18:08:00.597258 | orchestrator | changed: [testbed-node-4]
2025-07-04 18:08:00.597270 | orchestrator | changed: [testbed-node-2]
2025-07-04 18:08:00.597281 | orchestrator |
2025-07-04 18:08:00.597295 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2025-07-04 18:08:00.597307 | orchestrator | Friday 04 July 2025 18:06:18 +0000 (0:00:42.148) 0:01:40.304 ***********
2025-07-04 18:08:00.597332 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:08:00.597345 | orchestrator | changed: [testbed-manager]
2025-07-04 18:08:00.597360 | orchestrator | changed:
[testbed-node-1] 2025-07-04 18:08:00.597375 | orchestrator | changed: [testbed-node-4] 2025-07-04 18:08:00.597389 | orchestrator | changed: [testbed-node-3] 2025-07-04 18:08:00.597403 | orchestrator | changed: [testbed-node-2] 2025-07-04 18:08:00.597411 | orchestrator | changed: [testbed-node-5] 2025-07-04 18:08:00.597419 | orchestrator | 2025-07-04 18:08:00.597427 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2025-07-04 18:08:00.597435 | orchestrator | Friday 04 July 2025 18:07:48 +0000 (0:01:30.479) 0:03:10.783 *********** 2025-07-04 18:08:00.597443 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:08:00.597459 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:08:00.597467 | orchestrator | ok: [testbed-manager] 2025-07-04 18:08:00.597475 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:08:00.597483 | orchestrator | ok: [testbed-node-3] 2025-07-04 18:08:00.597491 | orchestrator | ok: [testbed-node-4] 2025-07-04 18:08:00.597498 | orchestrator | ok: [testbed-node-5] 2025-07-04 18:08:00.597506 | orchestrator | 2025-07-04 18:08:00.597514 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2025-07-04 18:08:00.597522 | orchestrator | Friday 04 July 2025 18:07:51 +0000 (0:00:02.977) 0:03:13.761 *********** 2025-07-04 18:08:00.597530 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:08:00.597538 | orchestrator | changed: [testbed-manager] 2025-07-04 18:08:00.597546 | orchestrator | changed: [testbed-node-2] 2025-07-04 18:08:00.597553 | orchestrator | changed: [testbed-node-1] 2025-07-04 18:08:00.597561 | orchestrator | changed: [testbed-node-3] 2025-07-04 18:08:00.597569 | orchestrator | changed: [testbed-node-4] 2025-07-04 18:08:00.597577 | orchestrator | changed: [testbed-node-5] 2025-07-04 18:08:00.597585 | orchestrator | 2025-07-04 18:08:00.597593 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-04 
18:08:00.597601 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-07-04 18:08:00.597610 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-07-04 18:08:00.597618 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-07-04 18:08:00.597626 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-07-04 18:08:00.597634 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-07-04 18:08:00.597642 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-07-04 18:08:00.597649 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-07-04 18:08:00.597657 | orchestrator | 2025-07-04 18:08:00.597665 | orchestrator | 2025-07-04 18:08:00.597673 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-04 18:08:00.597681 | orchestrator | Friday 04 July 2025 18:07:57 +0000 (0:00:05.679) 0:03:19.440 *********** 2025-07-04 18:08:00.597689 | orchestrator | =============================================================================== 2025-07-04 18:08:00.597696 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 90.48s 2025-07-04 18:08:00.597704 | orchestrator | common : Restart fluentd container ------------------------------------- 42.15s 2025-07-04 18:08:00.597712 | orchestrator | common : Copying over config.json files for services -------------------- 6.06s 2025-07-04 18:08:00.597725 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 5.75s 2025-07-04 18:08:00.597733 | orchestrator | common : Restart cron container ----------------------------------------- 
5.68s 2025-07-04 18:08:00.597742 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 5.15s 2025-07-04 18:08:00.597749 | orchestrator | common : Ensuring config directories exist ------------------------------ 5.08s 2025-07-04 18:08:00.597757 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.50s 2025-07-04 18:08:00.597765 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 3.18s 2025-07-04 18:08:00.597773 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.02s 2025-07-04 18:08:00.597781 | orchestrator | common : Find custom fluentd input config files ------------------------- 3.00s 2025-07-04 18:08:00.597788 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.98s 2025-07-04 18:08:00.597796 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.73s 2025-07-04 18:08:00.597804 | orchestrator | common : Check common containers ---------------------------------------- 2.71s 2025-07-04 18:08:00.597819 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.66s 2025-07-04 18:08:00.597827 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.26s 2025-07-04 18:08:00.597835 | orchestrator | common : include_tasks -------------------------------------------------- 1.59s 2025-07-04 18:08:00.597842 | orchestrator | common : Find custom fluentd filter config files ------------------------ 1.53s 2025-07-04 18:08:00.597850 | orchestrator | common : Creating log volume -------------------------------------------- 1.40s 2025-07-04 18:08:00.597858 | orchestrator | common : include_tasks -------------------------------------------------- 1.20s 2025-07-04 18:08:00.598811 | orchestrator | 2025-07-04 18:08:00 | INFO  | Task 1276c6d3-b9c2-4286-96b7-01fb361f2efa is in state 
STARTED 2025-07-04 18:08:00.600219 | orchestrator | 2025-07-04 18:08:00 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED 2025-07-04 18:08:00.600249 | orchestrator | 2025-07-04 18:08:00 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:08:03.644660 | orchestrator | 2025-07-04 18:08:03 | INFO  | Task b3b1dbd7-3e23-4d6d-9f64-a8a7811fc683 is in state STARTED 2025-07-04 18:08:03.645463 | orchestrator | 2025-07-04 18:08:03 | INFO  | Task b1d5955a-189a-43a1-a4f7-3ba9c16fa60d is in state STARTED 2025-07-04 18:08:03.646590 | orchestrator | 2025-07-04 18:08:03 | INFO  | Task a21bc798-738f-4a98-9b83-5e93eed95645 is in state STARTED 2025-07-04 18:08:03.652576 | orchestrator | 2025-07-04 18:08:03 | INFO  | Task 847f3d47-d817-46da-b0dd-ce5c950699c9 is in state STARTED 2025-07-04 18:08:03.653639 | orchestrator | 2025-07-04 18:08:03 | INFO  | Task 1276c6d3-b9c2-4286-96b7-01fb361f2efa is in state STARTED 2025-07-04 18:08:03.655299 | orchestrator | 2025-07-04 18:08:03 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED 2025-07-04 18:08:03.655344 | orchestrator | 2025-07-04 18:08:03 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:08:06.712509 | orchestrator | 2025-07-04 18:08:06 | INFO  | Task b3b1dbd7-3e23-4d6d-9f64-a8a7811fc683 is in state STARTED 2025-07-04 18:08:06.712612 | orchestrator | 2025-07-04 18:08:06 | INFO  | Task b1d5955a-189a-43a1-a4f7-3ba9c16fa60d is in state STARTED 2025-07-04 18:08:06.712628 | orchestrator | 2025-07-04 18:08:06 | INFO  | Task a21bc798-738f-4a98-9b83-5e93eed95645 is in state STARTED 2025-07-04 18:08:06.712640 | orchestrator | 2025-07-04 18:08:06 | INFO  | Task 847f3d47-d817-46da-b0dd-ce5c950699c9 is in state STARTED 2025-07-04 18:08:06.712652 | orchestrator | 2025-07-04 18:08:06 | INFO  | Task 1276c6d3-b9c2-4286-96b7-01fb361f2efa is in state STARTED 2025-07-04 18:08:06.715238 | orchestrator | 2025-07-04 18:08:06 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED 
2025-07-04 18:08:06.715319 | orchestrator | 2025-07-04 18:08:06 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:08:09.769493 | orchestrator | 2025-07-04 18:08:09 | INFO  | Task b3b1dbd7-3e23-4d6d-9f64-a8a7811fc683 is in state STARTED 2025-07-04 18:08:09.770705 | orchestrator | 2025-07-04 18:08:09 | INFO  | Task b1d5955a-189a-43a1-a4f7-3ba9c16fa60d is in state STARTED 2025-07-04 18:08:09.771834 | orchestrator | 2025-07-04 18:08:09 | INFO  | Task a21bc798-738f-4a98-9b83-5e93eed95645 is in state STARTED 2025-07-04 18:08:09.773192 | orchestrator | 2025-07-04 18:08:09 | INFO  | Task 847f3d47-d817-46da-b0dd-ce5c950699c9 is in state STARTED 2025-07-04 18:08:09.775160 | orchestrator | 2025-07-04 18:08:09 | INFO  | Task 1276c6d3-b9c2-4286-96b7-01fb361f2efa is in state STARTED 2025-07-04 18:08:09.776319 | orchestrator | 2025-07-04 18:08:09 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED 2025-07-04 18:08:09.776349 | orchestrator | 2025-07-04 18:08:09 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:08:12.820002 | orchestrator | 2025-07-04 18:08:12 | INFO  | Task b3b1dbd7-3e23-4d6d-9f64-a8a7811fc683 is in state STARTED 2025-07-04 18:08:12.821618 | orchestrator | 2025-07-04 18:08:12 | INFO  | Task b1d5955a-189a-43a1-a4f7-3ba9c16fa60d is in state STARTED 2025-07-04 18:08:12.824241 | orchestrator | 2025-07-04 18:08:12 | INFO  | Task a21bc798-738f-4a98-9b83-5e93eed95645 is in state STARTED 2025-07-04 18:08:12.825227 | orchestrator | 2025-07-04 18:08:12 | INFO  | Task 847f3d47-d817-46da-b0dd-ce5c950699c9 is in state STARTED 2025-07-04 18:08:12.827096 | orchestrator | 2025-07-04 18:08:12 | INFO  | Task 1276c6d3-b9c2-4286-96b7-01fb361f2efa is in state STARTED 2025-07-04 18:08:12.828949 | orchestrator | 2025-07-04 18:08:12 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED 2025-07-04 18:08:12.829176 | orchestrator | 2025-07-04 18:08:12 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:08:15.866306 | 
orchestrator | 2025-07-04 18:08:15 | INFO  | Task b3b1dbd7-3e23-4d6d-9f64-a8a7811fc683 is in state STARTED 2025-07-04 18:08:15.867814 | orchestrator | 2025-07-04 18:08:15 | INFO  | Task b1d5955a-189a-43a1-a4f7-3ba9c16fa60d is in state STARTED 2025-07-04 18:08:15.869576 | orchestrator | 2025-07-04 18:08:15 | INFO  | Task a21bc798-738f-4a98-9b83-5e93eed95645 is in state STARTED 2025-07-04 18:08:15.872089 | orchestrator | 2025-07-04 18:08:15 | INFO  | Task 847f3d47-d817-46da-b0dd-ce5c950699c9 is in state STARTED 2025-07-04 18:08:15.873191 | orchestrator | 2025-07-04 18:08:15 | INFO  | Task 1276c6d3-b9c2-4286-96b7-01fb361f2efa is in state STARTED 2025-07-04 18:08:15.874725 | orchestrator | 2025-07-04 18:08:15 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED 2025-07-04 18:08:15.874985 | orchestrator | 2025-07-04 18:08:15 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:08:18.923097 | orchestrator | 2025-07-04 18:08:18 | INFO  | Task b3b1dbd7-3e23-4d6d-9f64-a8a7811fc683 is in state SUCCESS 2025-07-04 18:08:18.924517 | orchestrator | 2025-07-04 18:08:18 | INFO  | Task b1d5955a-189a-43a1-a4f7-3ba9c16fa60d is in state STARTED 2025-07-04 18:08:18.927784 | orchestrator | 2025-07-04 18:08:18 | INFO  | Task a21bc798-738f-4a98-9b83-5e93eed95645 is in state STARTED 2025-07-04 18:08:18.930219 | orchestrator | 2025-07-04 18:08:18 | INFO  | Task 847f3d47-d817-46da-b0dd-ce5c950699c9 is in state STARTED 2025-07-04 18:08:18.933371 | orchestrator | 2025-07-04 18:08:18 | INFO  | Task 1276c6d3-b9c2-4286-96b7-01fb361f2efa is in state STARTED 2025-07-04 18:08:18.936403 | orchestrator | 2025-07-04 18:08:18 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED 2025-07-04 18:08:18.936807 | orchestrator | 2025-07-04 18:08:18 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:08:21.980398 | orchestrator | 2025-07-04 18:08:21 | INFO  | Task b1d5955a-189a-43a1-a4f7-3ba9c16fa60d is in state STARTED 2025-07-04 18:08:21.981182 | 
orchestrator | 2025-07-04 18:08:21 | INFO  | Task a21bc798-738f-4a98-9b83-5e93eed95645 is in state STARTED 2025-07-04 18:08:21.983046 | orchestrator | 2025-07-04 18:08:21 | INFO  | Task 9a30792d-643f-4105-babc-b76508c80c19 is in state STARTED 2025-07-04 18:08:21.985180 | orchestrator | 2025-07-04 18:08:21 | INFO  | Task 847f3d47-d817-46da-b0dd-ce5c950699c9 is in state STARTED 2025-07-04 18:08:21.986244 | orchestrator | 2025-07-04 18:08:21 | INFO  | Task 1276c6d3-b9c2-4286-96b7-01fb361f2efa is in state STARTED 2025-07-04 18:08:21.987542 | orchestrator | 2025-07-04 18:08:21 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED 2025-07-04 18:08:21.987719 | orchestrator | 2025-07-04 18:08:21 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:08:25.050394 | orchestrator | 2025-07-04 18:08:25 | INFO  | Task b1d5955a-189a-43a1-a4f7-3ba9c16fa60d is in state STARTED 2025-07-04 18:08:25.052321 | orchestrator | 2025-07-04 18:08:25 | INFO  | Task a21bc798-738f-4a98-9b83-5e93eed95645 is in state STARTED 2025-07-04 18:08:25.053585 | orchestrator | 2025-07-04 18:08:25 | INFO  | Task 9a30792d-643f-4105-babc-b76508c80c19 is in state STARTED 2025-07-04 18:08:25.054461 | orchestrator | 2025-07-04 18:08:25 | INFO  | Task 847f3d47-d817-46da-b0dd-ce5c950699c9 is in state STARTED 2025-07-04 18:08:25.055708 | orchestrator | 2025-07-04 18:08:25 | INFO  | Task 1276c6d3-b9c2-4286-96b7-01fb361f2efa is in state STARTED 2025-07-04 18:08:25.056703 | orchestrator | 2025-07-04 18:08:25 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED 2025-07-04 18:08:25.057112 | orchestrator | 2025-07-04 18:08:25 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:08:28.104994 | orchestrator | 2025-07-04 18:08:28 | INFO  | Task b1d5955a-189a-43a1-a4f7-3ba9c16fa60d is in state STARTED 2025-07-04 18:08:28.106129 | orchestrator | 2025-07-04 18:08:28 | INFO  | Task a21bc798-738f-4a98-9b83-5e93eed95645 is in state STARTED 2025-07-04 18:08:28.107883 | 
orchestrator | 2025-07-04 18:08:28 | INFO  | Task 9a30792d-643f-4105-babc-b76508c80c19 is in state STARTED 2025-07-04 18:08:28.109445 | orchestrator | 2025-07-04 18:08:28 | INFO  | Task 847f3d47-d817-46da-b0dd-ce5c950699c9 is in state STARTED 2025-07-04 18:08:28.110853 | orchestrator | 2025-07-04 18:08:28 | INFO  | Task 1276c6d3-b9c2-4286-96b7-01fb361f2efa is in state STARTED 2025-07-04 18:08:28.114540 | orchestrator | 2025-07-04 18:08:28 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED 2025-07-04 18:08:28.115647 | orchestrator | 2025-07-04 18:08:28 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:08:31.161516 | orchestrator | 2025-07-04 18:08:31 | INFO  | Task b1d5955a-189a-43a1-a4f7-3ba9c16fa60d is in state SUCCESS 2025-07-04 18:08:31.162459 | orchestrator | 2025-07-04 18:08:31.162507 | orchestrator | 2025-07-04 18:08:31.162536 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-04 18:08:31.162559 | orchestrator | 2025-07-04 18:08:31.162575 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-04 18:08:31.162586 | orchestrator | Friday 04 July 2025 18:08:03 +0000 (0:00:00.279) 0:00:00.279 *********** 2025-07-04 18:08:31.162598 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:08:31.162611 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:08:31.162622 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:08:31.162633 | orchestrator | 2025-07-04 18:08:31.162643 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-04 18:08:31.162695 | orchestrator | Friday 04 July 2025 18:08:03 +0000 (0:00:00.356) 0:00:00.636 *********** 2025-07-04 18:08:31.162709 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2025-07-04 18:08:31.162720 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2025-07-04 18:08:31.162731 | orchestrator | ok: 
[testbed-node-2] => (item=enable_memcached_True) 2025-07-04 18:08:31.162742 | orchestrator | 2025-07-04 18:08:31.162753 | orchestrator | PLAY [Apply role memcached] **************************************************** 2025-07-04 18:08:31.162763 | orchestrator | 2025-07-04 18:08:31.162788 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2025-07-04 18:08:31.162799 | orchestrator | Friday 04 July 2025 18:08:04 +0000 (0:00:00.664) 0:00:01.300 *********** 2025-07-04 18:08:31.162810 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-04 18:08:31.162822 | orchestrator | 2025-07-04 18:08:31.162833 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2025-07-04 18:08:31.162843 | orchestrator | Friday 04 July 2025 18:08:05 +0000 (0:00:00.755) 0:00:02.055 *********** 2025-07-04 18:08:31.162854 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-07-04 18:08:31.162866 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-07-04 18:08:31.162876 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-07-04 18:08:31.162887 | orchestrator | 2025-07-04 18:08:31.162949 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2025-07-04 18:08:31.162963 | orchestrator | Friday 04 July 2025 18:08:06 +0000 (0:00:01.174) 0:00:03.230 *********** 2025-07-04 18:08:31.162983 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-07-04 18:08:31.163003 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-07-04 18:08:31.163019 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-07-04 18:08:31.163032 | orchestrator | 2025-07-04 18:08:31.163045 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2025-07-04 18:08:31.163057 | orchestrator | Friday 04 July 2025 
18:08:08 +0000 (0:00:02.693) 0:00:05.924 *********** 2025-07-04 18:08:31.163069 | orchestrator | changed: [testbed-node-1] 2025-07-04 18:08:31.163082 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:08:31.163094 | orchestrator | changed: [testbed-node-2] 2025-07-04 18:08:31.163106 | orchestrator | 2025-07-04 18:08:31.163119 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2025-07-04 18:08:31.163131 | orchestrator | Friday 04 July 2025 18:08:10 +0000 (0:00:02.047) 0:00:07.971 *********** 2025-07-04 18:08:31.163143 | orchestrator | changed: [testbed-node-2] 2025-07-04 18:08:31.163156 | orchestrator | changed: [testbed-node-1] 2025-07-04 18:08:31.163169 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:08:31.163181 | orchestrator | 2025-07-04 18:08:31.163193 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-04 18:08:31.163206 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-04 18:08:31.163220 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-04 18:08:31.163247 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-04 18:08:31.163259 | orchestrator | 2025-07-04 18:08:31.163272 | orchestrator | 2025-07-04 18:08:31.163285 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-04 18:08:31.163298 | orchestrator | Friday 04 July 2025 18:08:18 +0000 (0:00:07.212) 0:00:15.183 *********** 2025-07-04 18:08:31.163309 | orchestrator | =============================================================================== 2025-07-04 18:08:31.163319 | orchestrator | memcached : Restart memcached container --------------------------------- 7.21s 2025-07-04 18:08:31.163339 | orchestrator | memcached : Copying over config.json files for services 
----------------- 2.69s 2025-07-04 18:08:31.163350 | orchestrator | memcached : Check memcached container ----------------------------------- 2.05s 2025-07-04 18:08:31.163361 | orchestrator | memcached : Ensuring config directories exist --------------------------- 1.17s 2025-07-04 18:08:31.163407 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.76s 2025-07-04 18:08:31.163419 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.66s 2025-07-04 18:08:31.163430 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.36s 2025-07-04 18:08:31.163441 | orchestrator | 2025-07-04 18:08:31.163452 | orchestrator | 2025-07-04 18:08:31.163463 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-04 18:08:31.163473 | orchestrator | 2025-07-04 18:08:31.163484 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-04 18:08:31.163495 | orchestrator | Friday 04 July 2025 18:08:03 +0000 (0:00:00.402) 0:00:00.403 *********** 2025-07-04 18:08:31.163505 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:08:31.163516 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:08:31.163527 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:08:31.163538 | orchestrator | 2025-07-04 18:08:31.163548 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-04 18:08:31.163588 | orchestrator | Friday 04 July 2025 18:08:04 +0000 (0:00:00.385) 0:00:00.788 *********** 2025-07-04 18:08:31.163599 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2025-07-04 18:08:31.163611 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2025-07-04 18:08:31.163638 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2025-07-04 18:08:31.163649 | orchestrator | 2025-07-04 18:08:31.163660 | orchestrator | PLAY 
[Apply role redis] ******************************************************** 2025-07-04 18:08:31.163671 | orchestrator | 2025-07-04 18:08:31.163693 | orchestrator | TASK [redis : include_tasks] *************************************************** 2025-07-04 18:08:31.163704 | orchestrator | Friday 04 July 2025 18:08:04 +0000 (0:00:00.546) 0:00:01.335 *********** 2025-07-04 18:08:31.163715 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-04 18:08:31.163726 | orchestrator | 2025-07-04 18:08:31.163737 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2025-07-04 18:08:31.163747 | orchestrator | Friday 04 July 2025 18:08:05 +0000 (0:00:00.735) 0:00:02.070 *********** 2025-07-04 18:08:31.163761 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-07-04 18:08:31.163788 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-07-04 
18:08:31.163800 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-07-04 18:08:31.163820 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-07-04 18:08:31.163832 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-07-04 18:08:31.163854 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-07-04 18:08:31.163866 | orchestrator | 2025-07-04 18:08:31.163877 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2025-07-04 18:08:31.163888 | orchestrator | Friday 04 July 2025 18:08:07 +0000 (0:00:01.876) 0:00:03.946 *********** 2025-07-04 18:08:31.163945 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-07-04 18:08:31.163960 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': 
['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-07-04 18:08:31.163971 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-07-04 18:08:31.163992 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-07-04 18:08:31.164004 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-07-04 18:08:31.164025 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-07-04 18:08:31.164036 | orchestrator | 2025-07-04 18:08:31.164047 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2025-07-04 18:08:31.164058 | orchestrator | Friday 04 July 2025 18:08:10 +0000 (0:00:03.317) 0:00:07.264 *********** 2025-07-04 18:08:31.164093 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-07-04 18:08:31.164105 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-07-04 18:08:31.164117 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-07-04 18:08:31.164135 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 
'timeout': '30'}}}) 2025-07-04 18:08:31.164146 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-07-04 18:08:31.164166 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-07-04 18:08:31.164177 | orchestrator | 2025-07-04 18:08:31.164188 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2025-07-04 18:08:31.164199 | orchestrator | Friday 04 July 2025 18:08:13 +0000 (0:00:03.258) 0:00:10.523 *********** 2025-07-04 18:08:31.164215 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-07-04 18:08:31.164227 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-07-04 18:08:31.164264 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-07-04 18:08:31.164275 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-07-04 18:08:31.164287 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-07-04 18:08:31.164304 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-07-04 18:08:31.164315 | orchestrator | 2025-07-04 18:08:31.164326 | orchestrator | TASK 
[redis : Flush handlers] **************************************************
2025-07-04 18:08:31.164337 | orchestrator | Friday 04 July 2025 18:08:15 +0000 (0:00:00.069) 0:00:12.365 ***********
2025-07-04 18:08:31.164348 | orchestrator |
2025-07-04 18:08:31.164359 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-07-04 18:08:31.164370 | orchestrator | Friday 04 July 2025 18:08:15 +0000 (0:00:00.072) 0:00:12.434 ***********
2025-07-04 18:08:31.164380 | orchestrator |
2025-07-04 18:08:31.164391 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-07-04 18:08:31.164402 | orchestrator | Friday 04 July 2025 18:08:15 +0000 (0:00:00.080) 0:00:12.507 ***********
2025-07-04 18:08:31.164412 | orchestrator |
2025-07-04 18:08:31.164423 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ******************************
2025-07-04 18:08:31.164433 | orchestrator | Friday 04 July 2025 18:08:15 +0000 (0:00:00.080) 0:00:12.587 ***********
2025-07-04 18:08:31.164444 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:08:31.164460 | orchestrator | changed: [testbed-node-1]
2025-07-04 18:08:31.164471 | orchestrator | changed: [testbed-node-2]
2025-07-04 18:08:31.164490 | orchestrator |
2025-07-04 18:08:31.164501 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] *********************
2025-07-04 18:08:31.164511 | orchestrator | Friday 04 July 2025 18:08:20 +0000 (0:00:04.567) 0:00:17.155 ***********
2025-07-04 18:08:31.164522 | orchestrator | changed: [testbed-node-2]
2025-07-04 18:08:31.164566 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:08:31.164578 | orchestrator | changed: [testbed-node-1]
2025-07-04 18:08:31.164588 | orchestrator |
2025-07-04 18:08:31.164599 | orchestrator | PLAY RECAP *********************************************************************
2025-07-04 18:08:31.164610 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-04 18:08:31.164621 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-04 18:08:31.164632 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-04 18:08:31.164643 | orchestrator |
2025-07-04 18:08:31.164658 | orchestrator |
2025-07-04 18:08:31.164669 | orchestrator | TASKS RECAP ********************************************************************
2025-07-04 18:08:31.164680 | orchestrator | Friday 04 July 2025 18:08:30 +0000 (0:00:09.970) 0:00:27.125 ***********
2025-07-04 18:08:31.164691 | orchestrator | ===============================================================================
2025-07-04 18:08:31.164762 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 9.97s
2025-07-04 18:08:31.164774 | orchestrator | redis : Restart redis container ----------------------------------------- 4.57s
2025-07-04 18:08:31.164784 | orchestrator | redis : Copying over default config.json files -------------------------- 3.32s
2025-07-04 18:08:31.164794 | orchestrator | redis : Copying over redis config files --------------------------------- 3.26s
2025-07-04 18:08:31.164805 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.88s
2025-07-04 18:08:31.164816 | orchestrator | redis : Check redis containers ------------------------------------------ 1.84s
2025-07-04 18:08:31.164826 | orchestrator | redis : include_tasks --------------------------------------------------- 0.74s
2025-07-04 18:08:31.164836 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.55s
2025-07-04 18:08:31.164847 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.39s
2025-07-04 18:08:31.164857 | orchestrator | redis : Flush handlers
-------------------------------------------------- 0.22s 2025-07-04 18:08:31.165043 | orchestrator | 2025-07-04 18:08:31 | INFO  | Task a21bc798-738f-4a98-9b83-5e93eed95645 is in state STARTED 2025-07-04 18:08:31.166839 | orchestrator | 2025-07-04 18:08:31 | INFO  | Task 9a30792d-643f-4105-babc-b76508c80c19 is in state STARTED 2025-07-04 18:08:31.168350 | orchestrator | 2025-07-04 18:08:31 | INFO  | Task 847f3d47-d817-46da-b0dd-ce5c950699c9 is in state STARTED 2025-07-04 18:08:31.169962 | orchestrator | 2025-07-04 18:08:31 | INFO  | Task 1276c6d3-b9c2-4286-96b7-01fb361f2efa is in state STARTED 2025-07-04 18:08:31.171622 | orchestrator | 2025-07-04 18:08:31 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED 2025-07-04 18:08:31.171819 | orchestrator | 2025-07-04 18:08:31 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:08:34.232462 | orchestrator | 2025-07-04 18:08:34 | INFO  | Task a21bc798-738f-4a98-9b83-5e93eed95645 is in state STARTED 2025-07-04 18:08:34.233746 | orchestrator | 2025-07-04 18:08:34 | INFO  | Task 9a30792d-643f-4105-babc-b76508c80c19 is in state STARTED 2025-07-04 18:08:34.235199 | orchestrator | 2025-07-04 18:08:34 | INFO  | Task 847f3d47-d817-46da-b0dd-ce5c950699c9 is in state STARTED 2025-07-04 18:08:34.236068 | orchestrator | 2025-07-04 18:08:34 | INFO  | Task 1276c6d3-b9c2-4286-96b7-01fb361f2efa is in state STARTED 2025-07-04 18:08:34.237305 | orchestrator | 2025-07-04 18:08:34 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED 2025-07-04 18:08:34.237395 | orchestrator | 2025-07-04 18:08:34 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:08:37.269142 | orchestrator | 2025-07-04 18:08:37 | INFO  | Task a21bc798-738f-4a98-9b83-5e93eed95645 is in state STARTED 2025-07-04 18:08:37.271072 | orchestrator | 2025-07-04 18:08:37 | INFO  | Task 9a30792d-643f-4105-babc-b76508c80c19 is in state STARTED 2025-07-04 18:08:37.272870 | orchestrator | 2025-07-04 18:08:37 | INFO  | Task 
847f3d47-d817-46da-b0dd-ce5c950699c9 is in state STARTED 2025-07-04 18:08:37.275123 | orchestrator | 2025-07-04 18:08:37 | INFO  | Task 1276c6d3-b9c2-4286-96b7-01fb361f2efa is in state STARTED 2025-07-04 18:08:37.277394 | orchestrator | 2025-07-04 18:08:37 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED 2025-07-04 18:08:37.277428 | orchestrator | 2025-07-04 18:08:37 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:08:40.327101 | orchestrator | 2025-07-04 18:08:40 | INFO  | Task a21bc798-738f-4a98-9b83-5e93eed95645 is in state STARTED 2025-07-04 18:08:40.327220 | orchestrator | 2025-07-04 18:08:40 | INFO  | Task 9a30792d-643f-4105-babc-b76508c80c19 is in state STARTED 2025-07-04 18:08:40.327230 | orchestrator | 2025-07-04 18:08:40 | INFO  | Task 847f3d47-d817-46da-b0dd-ce5c950699c9 is in state STARTED 2025-07-04 18:08:40.327266 | orchestrator | 2025-07-04 18:08:40 | INFO  | Task 1276c6d3-b9c2-4286-96b7-01fb361f2efa is in state STARTED 2025-07-04 18:08:40.328322 | orchestrator | 2025-07-04 18:08:40 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED 2025-07-04 18:08:40.328332 | orchestrator | 2025-07-04 18:08:40 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:08:43.379975 | orchestrator | 2025-07-04 18:08:43 | INFO  | Task a21bc798-738f-4a98-9b83-5e93eed95645 is in state STARTED 2025-07-04 18:08:43.381918 | orchestrator | 2025-07-04 18:08:43 | INFO  | Task 9a30792d-643f-4105-babc-b76508c80c19 is in state STARTED 2025-07-04 18:08:43.382711 | orchestrator | 2025-07-04 18:08:43 | INFO  | Task 847f3d47-d817-46da-b0dd-ce5c950699c9 is in state STARTED 2025-07-04 18:08:43.384054 | orchestrator | 2025-07-04 18:08:43 | INFO  | Task 1276c6d3-b9c2-4286-96b7-01fb361f2efa is in state STARTED 2025-07-04 18:08:43.386359 | orchestrator | 2025-07-04 18:08:43 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED 2025-07-04 18:08:43.386470 | orchestrator | 2025-07-04 18:08:43 | INFO  | Wait 1 
second(s) until the next check 2025-07-04 18:08:46.421769 | orchestrator | 2025-07-04 18:08:46 | INFO  | Task a21bc798-738f-4a98-9b83-5e93eed95645 is in state STARTED 2025-07-04 18:08:46.422979 | orchestrator | 2025-07-04 18:08:46 | INFO  | Task 9a30792d-643f-4105-babc-b76508c80c19 is in state STARTED 2025-07-04 18:08:46.427636 | orchestrator | 2025-07-04 18:08:46 | INFO  | Task 847f3d47-d817-46da-b0dd-ce5c950699c9 is in state STARTED 2025-07-04 18:08:46.428695 | orchestrator | 2025-07-04 18:08:46 | INFO  | Task 1276c6d3-b9c2-4286-96b7-01fb361f2efa is in state STARTED 2025-07-04 18:08:46.429951 | orchestrator | 2025-07-04 18:08:46 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED 2025-07-04 18:08:46.433401 | orchestrator | 2025-07-04 18:08:46 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:08:49.468194 | orchestrator | 2025-07-04 18:08:49 | INFO  | Task a21bc798-738f-4a98-9b83-5e93eed95645 is in state STARTED 2025-07-04 18:08:49.468260 | orchestrator | 2025-07-04 18:08:49 | INFO  | Task 9a30792d-643f-4105-babc-b76508c80c19 is in state STARTED 2025-07-04 18:08:49.468270 | orchestrator | 2025-07-04 18:08:49 | INFO  | Task 847f3d47-d817-46da-b0dd-ce5c950699c9 is in state STARTED 2025-07-04 18:08:49.468296 | orchestrator | 2025-07-04 18:08:49 | INFO  | Task 1276c6d3-b9c2-4286-96b7-01fb361f2efa is in state STARTED 2025-07-04 18:08:49.468305 | orchestrator | 2025-07-04 18:08:49 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED 2025-07-04 18:08:49.468313 | orchestrator | 2025-07-04 18:08:49 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:08:52.493675 | orchestrator | 2025-07-04 18:08:52 | INFO  | Task a21bc798-738f-4a98-9b83-5e93eed95645 is in state STARTED 2025-07-04 18:08:52.495209 | orchestrator | 2025-07-04 18:08:52 | INFO  | Task 9a30792d-643f-4105-babc-b76508c80c19 is in state STARTED 2025-07-04 18:08:52.496419 | orchestrator | 2025-07-04 18:08:52 | INFO  | Task 
847f3d47-d817-46da-b0dd-ce5c950699c9 is in state STARTED 2025-07-04 18:08:52.497967 | orchestrator | 2025-07-04 18:08:52 | INFO  | Task 1276c6d3-b9c2-4286-96b7-01fb361f2efa is in state STARTED 2025-07-04 18:08:52.498746 | orchestrator | 2025-07-04 18:08:52 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED 2025-07-04 18:08:52.498779 | orchestrator | 2025-07-04 18:08:52 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:08:55.556715 | orchestrator | 2025-07-04 18:08:55 | INFO  | Task a21bc798-738f-4a98-9b83-5e93eed95645 is in state STARTED 2025-07-04 18:08:55.559671 | orchestrator | 2025-07-04 18:08:55 | INFO  | Task 9a30792d-643f-4105-babc-b76508c80c19 is in state STARTED 2025-07-04 18:08:55.562643 | orchestrator | 2025-07-04 18:08:55 | INFO  | Task 847f3d47-d817-46da-b0dd-ce5c950699c9 is in state STARTED 2025-07-04 18:08:55.566364 | orchestrator | 2025-07-04 18:08:55 | INFO  | Task 1276c6d3-b9c2-4286-96b7-01fb361f2efa is in state STARTED 2025-07-04 18:08:55.570626 | orchestrator | 2025-07-04 18:08:55 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED 2025-07-04 18:08:55.570707 | orchestrator | 2025-07-04 18:08:55 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:08:58.612066 | orchestrator | 2025-07-04 18:08:58 | INFO  | Task a21bc798-738f-4a98-9b83-5e93eed95645 is in state STARTED 2025-07-04 18:08:58.612252 | orchestrator | 2025-07-04 18:08:58 | INFO  | Task 9a30792d-643f-4105-babc-b76508c80c19 is in state STARTED 2025-07-04 18:08:58.613115 | orchestrator | 2025-07-04 18:08:58 | INFO  | Task 847f3d47-d817-46da-b0dd-ce5c950699c9 is in state STARTED 2025-07-04 18:08:58.615317 | orchestrator | 2025-07-04 18:08:58 | INFO  | Task 1276c6d3-b9c2-4286-96b7-01fb361f2efa is in state STARTED 2025-07-04 18:08:58.618098 | orchestrator | 2025-07-04 18:08:58 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED 2025-07-04 18:08:58.618145 | orchestrator | 2025-07-04 18:08:58 | INFO  | Wait 1 
second(s) until the next check 2025-07-04 18:09:01.651808 | orchestrator | 2025-07-04 18:09:01 | INFO  | Task a21bc798-738f-4a98-9b83-5e93eed95645 is in state STARTED 2025-07-04 18:09:01.655212 | orchestrator | 2025-07-04 18:09:01 | INFO  | Task 9a30792d-643f-4105-babc-b76508c80c19 is in state STARTED 2025-07-04 18:09:01.655675 | orchestrator | 2025-07-04 18:09:01 | INFO  | Task 847f3d47-d817-46da-b0dd-ce5c950699c9 is in state STARTED 2025-07-04 18:09:01.659686 | orchestrator | 2025-07-04 18:09:01 | INFO  | Task 1276c6d3-b9c2-4286-96b7-01fb361f2efa is in state STARTED 2025-07-04 18:09:01.662917 | orchestrator | 2025-07-04 18:09:01 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED 2025-07-04 18:09:01.662980 | orchestrator | 2025-07-04 18:09:01 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:09:04.709055 | orchestrator | 2025-07-04 18:09:04 | INFO  | Task a21bc798-738f-4a98-9b83-5e93eed95645 is in state STARTED 2025-07-04 18:09:04.710499 | orchestrator | 2025-07-04 18:09:04 | INFO  | Task 9a30792d-643f-4105-babc-b76508c80c19 is in state STARTED 2025-07-04 18:09:04.712113 | orchestrator | 2025-07-04 18:09:04 | INFO  | Task 847f3d47-d817-46da-b0dd-ce5c950699c9 is in state STARTED 2025-07-04 18:09:04.714745 | orchestrator | 2025-07-04 18:09:04 | INFO  | Task 1276c6d3-b9c2-4286-96b7-01fb361f2efa is in state STARTED 2025-07-04 18:09:04.715473 | orchestrator | 2025-07-04 18:09:04 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED 2025-07-04 18:09:04.715527 | orchestrator | 2025-07-04 18:09:04 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:09:07.766956 | orchestrator | 2025-07-04 18:09:07 | INFO  | Task a21bc798-738f-4a98-9b83-5e93eed95645 is in state STARTED 2025-07-04 18:09:07.772297 | orchestrator | 2025-07-04 18:09:07 | INFO  | Task 9a30792d-643f-4105-babc-b76508c80c19 is in state STARTED 2025-07-04 18:09:07.778062 | orchestrator | 2025-07-04 18:09:07 | INFO  | Task 
847f3d47-d817-46da-b0dd-ce5c950699c9 is in state STARTED 2025-07-04 18:09:07.781653 | orchestrator | 2025-07-04 18:09:07 | INFO  | Task 1276c6d3-b9c2-4286-96b7-01fb361f2efa is in state STARTED 2025-07-04 18:09:07.783745 | orchestrator | 2025-07-04 18:09:07 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED 2025-07-04 18:09:07.783787 | orchestrator | 2025-07-04 18:09:07 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:09:10.848647 | orchestrator | 2025-07-04 18:09:10 | INFO  | Task a21bc798-738f-4a98-9b83-5e93eed95645 is in state STARTED 2025-07-04 18:09:10.855895 | orchestrator | 2025-07-04 18:09:10 | INFO  | Task 9a30792d-643f-4105-babc-b76508c80c19 is in state STARTED 2025-07-04 18:09:10.857209 | orchestrator | 2025-07-04 18:09:10 | INFO  | Task 847f3d47-d817-46da-b0dd-ce5c950699c9 is in state STARTED 2025-07-04 18:09:10.860679 | orchestrator | 2025-07-04 18:09:10 | INFO  | Task 1276c6d3-b9c2-4286-96b7-01fb361f2efa is in state STARTED 2025-07-04 18:09:10.863849 | orchestrator | 2025-07-04 18:09:10 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED 2025-07-04 18:09:10.864026 | orchestrator | 2025-07-04 18:09:10 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:09:13.912742 | orchestrator | 2025-07-04 18:09:13 | INFO  | Task a21bc798-738f-4a98-9b83-5e93eed95645 is in state STARTED 2025-07-04 18:09:13.913614 | orchestrator | 2025-07-04 18:09:13 | INFO  | Task 9a30792d-643f-4105-babc-b76508c80c19 is in state STARTED 2025-07-04 18:09:13.921417 | orchestrator | 2025-07-04 18:09:13 | INFO  | Task 847f3d47-d817-46da-b0dd-ce5c950699c9 is in state STARTED 2025-07-04 18:09:13.924582 | orchestrator | 2025-07-04 18:09:13 | INFO  | Task 6e1b8e93-1adc-4061-afa0-122aa0f01357 is in state STARTED 2025-07-04 18:09:13.927998 | orchestrator | 2025-07-04 18:09:13 | INFO  | Task 1276c6d3-b9c2-4286-96b7-01fb361f2efa is in state SUCCESS 2025-07-04 18:09:13.929885 | orchestrator | 2025-07-04 18:09:13.929939 | orchestrator 
|
2025-07-04 18:09:13.929981 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-04 18:09:13.930002 | orchestrator |
2025-07-04 18:09:13.930088 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-04 18:09:13.930105 | orchestrator | Friday 04 July 2025 18:08:04 +0000 (0:00:00.304) 0:00:00.304 ***********
2025-07-04 18:09:13.930116 | orchestrator | ok: [testbed-node-0]
2025-07-04 18:09:13.930129 | orchestrator | ok: [testbed-node-1]
2025-07-04 18:09:13.930140 | orchestrator | ok: [testbed-node-2]
2025-07-04 18:09:13.930150 | orchestrator | ok: [testbed-node-3]
2025-07-04 18:09:13.930161 | orchestrator | ok: [testbed-node-4]
2025-07-04 18:09:13.930171 | orchestrator | ok: [testbed-node-5]
2025-07-04 18:09:13.930182 | orchestrator |
2025-07-04 18:09:13.930220 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-04 18:09:13.930232 | orchestrator | Friday 04 July 2025 18:08:06 +0000 (0:00:01.361) 0:00:01.666 ***********
2025-07-04 18:09:13.930244 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-07-04 18:09:13.930255 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-07-04 18:09:13.930266 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-07-04 18:09:13.930276 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-07-04 18:09:13.930286 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-07-04 18:09:13.930297 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-07-04 18:09:13.930308 | orchestrator |
2025-07-04 18:09:13.930318 | orchestrator | PLAY [Apply role openvswitch] **************************************************
2025-07-04 18:09:13.930329 | orchestrator |
2025-07-04 18:09:13.930340 | orchestrator | TASK [openvswitch : include_tasks] *********************************************
2025-07-04 18:09:13.930350 | orchestrator | Friday 04 July 2025 18:08:07 +0000 (0:00:00.972) 0:00:02.638 ***********
2025-07-04 18:09:13.930362 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-04 18:09:13.930374 | orchestrator |
2025-07-04 18:09:13.930385 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-07-04 18:09:13.930396 | orchestrator | Friday 04 July 2025 18:08:08 +0000 (0:00:01.780) 0:00:04.418 ***********
2025-07-04 18:09:13.930421 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2025-07-04 18:09:13.930433 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2025-07-04 18:09:13.930444 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2025-07-04 18:09:13.930456 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2025-07-04 18:09:13.930469 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2025-07-04 18:09:13.930481 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2025-07-04 18:09:13.930494 | orchestrator |
2025-07-04 18:09:13.930506 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-07-04 18:09:13.930519 | orchestrator | Friday 04 July 2025 18:08:10 +0000 (0:00:01.923) 0:00:06.342 ***********
2025-07-04 18:09:13.930626 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2025-07-04 18:09:13.930640 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2025-07-04 18:09:13.930652 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2025-07-04 18:09:13.930665 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2025-07-04 18:09:13.930677 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2025-07-04 18:09:13.930689 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2025-07-04 18:09:13.930701 | orchestrator |
2025-07-04 18:09:13.930712 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2025-07-04 18:09:13.930724 | orchestrator | Friday 04 July 2025 18:08:12 +0000 (0:00:01.835) 0:00:08.178 ***********
2025-07-04 18:09:13.930737 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)
2025-07-04 18:09:13.930749 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:09:13.930762 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)
2025-07-04 18:09:13.930775 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:09:13.930786 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)
2025-07-04 18:09:13.930799 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:09:13.930812 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)
2025-07-04 18:09:13.930822 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:09:13.930833 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)
2025-07-04 18:09:13.930843 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:09:13.930890 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)
2025-07-04 18:09:13.930902 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:09:13.930913 | orchestrator |
2025-07-04 18:09:13.930923 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] *****************
2025-07-04 18:09:13.930934 | orchestrator | Friday 04 July 2025 18:08:14 +0000 (0:00:01.378) 0:00:09.556 ***********
2025-07-04 18:09:13.930944 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:09:13.930957 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:09:13.930975 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:09:13.930994 | orchestrator | skipping:
[testbed-node-3] 2025-07-04 18:09:13.931011 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:09:13.931029 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:09:13.931046 | orchestrator | 2025-07-04 18:09:13.931061 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2025-07-04 18:09:13.931077 | orchestrator | Friday 04 July 2025 18:08:15 +0000 (0:00:00.950) 0:00:10.506 *********** 2025-07-04 18:09:13.931133 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-04 18:09:13.931159 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-04 18:09:13.931177 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-04 18:09:13.931195 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-04 18:09:13.931211 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-04 18:09:13.931251 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-04 18:09:13.931278 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-04 18:09:13.931297 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 
'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-04 18:09:13.931315 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-04 18:09:13.931332 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 
2025-07-04 18:09:13.931361 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-04 18:09:13.931458 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-04 18:09:13.931475 | orchestrator | 2025-07-04 18:09:13.931493 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2025-07-04 18:09:13.931510 | orchestrator | Friday 04 July 2025 18:08:16 +0000 (0:00:01.747) 0:00:12.254 *********** 2025-07-04 18:09:13.931529 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-04 18:09:13.931547 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-04 18:09:13.931576 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-04 18:09:13.931610 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-04 18:09:13.931629 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-04 18:09:13.931667 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-04 18:09:13.931685 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-04 18:09:13.931702 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-04 18:09:13.931718 | orchestrator | changed: [testbed-node-3] => 
(item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-04 18:09:13.931747 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-04 18:09:13.931781 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-04 18:09:13.931802 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-04 18:09:13.931820 | orchestrator | 2025-07-04 18:09:13.931837 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2025-07-04 18:09:13.931895 | orchestrator | Friday 04 July 2025 18:08:19 +0000 (0:00:03.042) 0:00:15.296 *********** 2025-07-04 18:09:13.931916 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:09:13.931937 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:09:13.931959 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:09:13.931978 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:09:13.931997 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:09:13.932015 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:09:13.932034 | orchestrator | 2025-07-04 18:09:13.932055 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2025-07-04 18:09:13.932074 | orchestrator | Friday 04 July 2025 18:08:21 +0000 (0:00:01.416) 0:00:16.713 *********** 2025-07-04 18:09:13.932094 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-04 18:09:13.932129 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-04 18:09:13.932150 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 
'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-04 18:09:13.932202 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-04 18:09:13.932224 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-04 18:09:13.932246 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 
'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-04 18:09:13.932287 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-04 18:09:13.932308 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-04 18:09:13.932326 | orchestrator | 
changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-04 18:09:13.932364 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-04 18:09:13.932385 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-04 18:09:13.932405 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-04 18:09:13.932435 | orchestrator | 2025-07-04 18:09:13.932455 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-07-04 18:09:13.932476 | orchestrator | Friday 04 July 2025 18:08:24 +0000 (0:00:03.111) 0:00:19.825 *********** 2025-07-04 18:09:13.932497 | orchestrator | 2025-07-04 18:09:13.932516 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-07-04 18:09:13.932535 | orchestrator | Friday 04 July 2025 18:08:24 +0000 (0:00:00.142) 0:00:19.967 *********** 2025-07-04 18:09:13.932554 | orchestrator | 2025-07-04 18:09:13.932573 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-07-04 18:09:13.932592 | orchestrator | Friday 04 July 2025 18:08:24 +0000 (0:00:00.133) 0:00:20.100 *********** 2025-07-04 18:09:13.932611 | orchestrator | 2025-07-04 18:09:13.932630 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 
2025-07-04 18:09:13.932649 | orchestrator | Friday 04 July 2025 18:08:24 +0000 (0:00:00.130) 0:00:20.231 *********** 2025-07-04 18:09:13.932666 | orchestrator | 2025-07-04 18:09:13.932685 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-07-04 18:09:13.932702 | orchestrator | Friday 04 July 2025 18:08:24 +0000 (0:00:00.139) 0:00:20.370 *********** 2025-07-04 18:09:13.932718 | orchestrator | 2025-07-04 18:09:13.932735 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-07-04 18:09:13.932753 | orchestrator | Friday 04 July 2025 18:08:25 +0000 (0:00:00.217) 0:00:20.588 *********** 2025-07-04 18:09:13.932770 | orchestrator | 2025-07-04 18:09:13.932788 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2025-07-04 18:09:13.932807 | orchestrator | Friday 04 July 2025 18:08:25 +0000 (0:00:00.322) 0:00:20.910 *********** 2025-07-04 18:09:13.932824 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:09:13.932844 | orchestrator | changed: [testbed-node-1] 2025-07-04 18:09:13.932941 | orchestrator | changed: [testbed-node-3] 2025-07-04 18:09:13.932961 | orchestrator | changed: [testbed-node-4] 2025-07-04 18:09:13.932979 | orchestrator | changed: [testbed-node-2] 2025-07-04 18:09:13.932997 | orchestrator | changed: [testbed-node-5] 2025-07-04 18:09:13.933016 | orchestrator | 2025-07-04 18:09:13.933034 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2025-07-04 18:09:13.933053 | orchestrator | Friday 04 July 2025 18:08:41 +0000 (0:00:15.879) 0:00:36.790 *********** 2025-07-04 18:09:13.933071 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:09:13.933088 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:09:13.933106 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:09:13.933122 | orchestrator | ok: [testbed-node-3] 2025-07-04 18:09:13.933140 | orchestrator | ok: 
[testbed-node-4] 2025-07-04 18:09:13.933158 | orchestrator | ok: [testbed-node-5] 2025-07-04 18:09:13.933175 | orchestrator | 2025-07-04 18:09:13.933194 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-07-04 18:09:13.933213 | orchestrator | Friday 04 July 2025 18:08:42 +0000 (0:00:01.431) 0:00:38.222 *********** 2025-07-04 18:09:13.933231 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:09:13.933249 | orchestrator | changed: [testbed-node-1] 2025-07-04 18:09:13.933268 | orchestrator | changed: [testbed-node-3] 2025-07-04 18:09:13.933287 | orchestrator | changed: [testbed-node-4] 2025-07-04 18:09:13.933306 | orchestrator | changed: [testbed-node-5] 2025-07-04 18:09:13.933323 | orchestrator | changed: [testbed-node-2] 2025-07-04 18:09:13.933342 | orchestrator | 2025-07-04 18:09:13.933362 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2025-07-04 18:09:13.933381 | orchestrator | Friday 04 July 2025 18:08:46 +0000 (0:00:04.171) 0:00:42.393 *********** 2025-07-04 18:09:13.933429 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2025-07-04 18:09:13.933470 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2025-07-04 18:09:13.933489 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2025-07-04 18:09:13.933507 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2025-07-04 18:09:13.933524 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2025-07-04 18:09:13.933542 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2025-07-04 
18:09:13.933561 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2025-07-04 18:09:13.933580 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2025-07-04 18:09:13.933599 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2025-07-04 18:09:13.933618 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2025-07-04 18:09:13.933635 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2025-07-04 18:09:13.933654 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2025-07-04 18:09:13.933673 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-07-04 18:09:13.933690 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-07-04 18:09:13.933708 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-07-04 18:09:13.933725 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-07-04 18:09:13.933741 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-07-04 18:09:13.933758 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-07-04 18:09:13.933775 | orchestrator | 2025-07-04 18:09:13.933792 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 
2025-07-04 18:09:13.933808 | orchestrator | Friday 04 July 2025 18:08:54 +0000 (0:00:07.727) 0:00:50.120 *********** 2025-07-04 18:09:13.933824 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2025-07-04 18:09:13.933841 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:09:13.933882 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2025-07-04 18:09:13.933899 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:09:13.933914 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2025-07-04 18:09:13.933930 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:09:13.933947 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2025-07-04 18:09:13.933963 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2025-07-04 18:09:13.933979 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2025-07-04 18:09:13.933994 | orchestrator | 2025-07-04 18:09:13.934010 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2025-07-04 18:09:13.934103 | orchestrator | Friday 04 July 2025 18:08:56 +0000 (0:00:02.278) 0:00:52.399 *********** 2025-07-04 18:09:13.934122 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2025-07-04 18:09:13.934139 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:09:13.934156 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2025-07-04 18:09:13.934190 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:09:13.934207 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2025-07-04 18:09:13.934224 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:09:13.934240 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2025-07-04 18:09:13.934257 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2025-07-04 18:09:13.934274 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2025-07-04 18:09:13.934290 | orchestrator 
| 2025-07-04 18:09:13.934306 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-07-04 18:09:13.934323 | orchestrator | Friday 04 July 2025 18:09:01 +0000 (0:00:04.148) 0:00:56.548 *********** 2025-07-04 18:09:13.934340 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:09:13.934357 | orchestrator | changed: [testbed-node-1] 2025-07-04 18:09:13.934374 | orchestrator | changed: [testbed-node-2] 2025-07-04 18:09:13.934392 | orchestrator | changed: [testbed-node-3] 2025-07-04 18:09:13.934409 | orchestrator | changed: [testbed-node-4] 2025-07-04 18:09:13.934488 | orchestrator | changed: [testbed-node-5] 2025-07-04 18:09:13.934506 | orchestrator | 2025-07-04 18:09:13.934523 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-04 18:09:13.934541 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-07-04 18:09:13.934588 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-07-04 18:09:13.934608 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-07-04 18:09:13.934626 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-07-04 18:09:13.934643 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-07-04 18:09:13.934661 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-07-04 18:09:13.934679 | orchestrator | 2025-07-04 18:09:13.934696 | orchestrator | 2025-07-04 18:09:13.934714 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-04 18:09:13.934732 | orchestrator | Friday 04 July 2025 18:09:10 +0000 (0:00:09.439) 0:01:05.987 *********** 2025-07-04 18:09:13.934750 | 
orchestrator | =============================================================================== 2025-07-04 18:09:13.934768 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 15.88s 2025-07-04 18:09:13.934787 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 13.61s 2025-07-04 18:09:13.934806 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 7.73s 2025-07-04 18:09:13.934825 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 4.15s 2025-07-04 18:09:13.934842 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 3.11s 2025-07-04 18:09:13.934884 | orchestrator | openvswitch : Copying over config.json files for services --------------- 3.04s 2025-07-04 18:09:13.934901 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.28s 2025-07-04 18:09:13.934918 | orchestrator | module-load : Load modules ---------------------------------------------- 1.92s 2025-07-04 18:09:13.934934 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.84s 2025-07-04 18:09:13.934952 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.78s 2025-07-04 18:09:13.934969 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.75s 2025-07-04 18:09:13.934987 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.43s 2025-07-04 18:09:13.935020 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.42s 2025-07-04 18:09:13.935039 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.38s 2025-07-04 18:09:13.935056 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.36s 2025-07-04 18:09:13.935074 | orchestrator | 
openvswitch : Flush Handlers -------------------------------------------- 1.09s 2025-07-04 18:09:13.935090 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.97s 2025-07-04 18:09:13.935107 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.95s 2025-07-04 18:09:13.935276 | orchestrator | 2025-07-04 18:09:13 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED 2025-07-04 18:09:13.935302 | orchestrator | 2025-07-04 18:09:13 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:09:16.990363 | orchestrator | 2025-07-04 18:09:16 | INFO  | Task a21bc798-738f-4a98-9b83-5e93eed95645 is in state STARTED 2025-07-04 18:09:16.992098 | orchestrator | 2025-07-04 18:09:16 | INFO  | Task 9a30792d-643f-4105-babc-b76508c80c19 is in state STARTED 2025-07-04 18:09:16.993310 | orchestrator | 2025-07-04 18:09:16 | INFO  | Task 847f3d47-d817-46da-b0dd-ce5c950699c9 is in state STARTED 2025-07-04 18:09:17.002671 | orchestrator | 2025-07-04 18:09:16 | INFO  | Task 6e1b8e93-1adc-4061-afa0-122aa0f01357 is in state STARTED 2025-07-04 18:09:17.002736 | orchestrator | 2025-07-04 18:09:16 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED 2025-07-04 18:09:17.002743 | orchestrator | 2025-07-04 18:09:16 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:09:20.054077 | orchestrator | 2025-07-04 18:09:20 | INFO  | Task a21bc798-738f-4a98-9b83-5e93eed95645 is in state STARTED 2025-07-04 18:09:20.054172 | orchestrator | 2025-07-04 18:09:20 | INFO  | Task 9a30792d-643f-4105-babc-b76508c80c19 is in state STARTED 2025-07-04 18:09:20.060496 | orchestrator | 2025-07-04 18:09:20 | INFO  | Task 847f3d47-d817-46da-b0dd-ce5c950699c9 is in state STARTED 2025-07-04 18:09:20.063309 | orchestrator | 2025-07-04 18:09:20 | INFO  | Task 6e1b8e93-1adc-4061-afa0-122aa0f01357 is in state STARTED 2025-07-04 18:09:20.064395 | orchestrator | 2025-07-04 18:09:20 | INFO  | Task 
02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED 2025-07-04 18:09:20.064559 | orchestrator | 2025-07-04 18:09:20 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:09:23.102014 | orchestrator | 2025-07-04 18:09:23 | INFO  | Task a21bc798-738f-4a98-9b83-5e93eed95645 is in state STARTED 2025-07-04 18:09:23.103264 | orchestrator | 2025-07-04 18:09:23 | INFO  | Task 9a30792d-643f-4105-babc-b76508c80c19 is in state STARTED 2025-07-04 18:09:23.107302 | orchestrator | 2025-07-04 18:09:23 | INFO  | Task 847f3d47-d817-46da-b0dd-ce5c950699c9 is in state STARTED 2025-07-04 18:09:23.109117 | orchestrator | 2025-07-04 18:09:23 | INFO  | Task 6e1b8e93-1adc-4061-afa0-122aa0f01357 is in state STARTED 2025-07-04 18:09:23.111740 | orchestrator | 2025-07-04 18:09:23 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED 2025-07-04 18:09:23.111976 | orchestrator | 2025-07-04 18:09:23 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:09:26.155965 | orchestrator | 2025-07-04 18:09:26 | INFO  | Task a21bc798-738f-4a98-9b83-5e93eed95645 is in state STARTED 2025-07-04 18:09:26.157919 | orchestrator | 2025-07-04 18:09:26 | INFO  | Task 9a30792d-643f-4105-babc-b76508c80c19 is in state STARTED 2025-07-04 18:09:26.158602 | orchestrator | 2025-07-04 18:09:26 | INFO  | Task 847f3d47-d817-46da-b0dd-ce5c950699c9 is in state STARTED 2025-07-04 18:09:26.159578 | orchestrator | 2025-07-04 18:09:26 | INFO  | Task 6e1b8e93-1adc-4061-afa0-122aa0f01357 is in state STARTED 2025-07-04 18:09:26.160251 | orchestrator | 2025-07-04 18:09:26 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED 2025-07-04 18:09:26.160368 | orchestrator | 2025-07-04 18:09:26 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:09:29.188085 | orchestrator | 2025-07-04 18:09:29 | INFO  | Task a21bc798-738f-4a98-9b83-5e93eed95645 is in state STARTED 2025-07-04 18:09:29.189004 | orchestrator | 2025-07-04 18:09:29 | INFO  | Task 
9a30792d-643f-4105-babc-b76508c80c19 is in state STARTED 2025-07-04 18:09:29.190281 | orchestrator | 2025-07-04 18:09:29 | INFO  | Task 847f3d47-d817-46da-b0dd-ce5c950699c9 is in state STARTED 2025-07-04 18:09:29.191006 | orchestrator | 2025-07-04 18:09:29 | INFO  | Task 6e1b8e93-1adc-4061-afa0-122aa0f01357 is in state STARTED 2025-07-04 18:09:29.192508 | orchestrator | 2025-07-04 18:09:29 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED 2025-07-04 18:09:29.192555 | orchestrator | 2025-07-04 18:09:29 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:09:32.229237 | orchestrator | 2025-07-04 18:09:32 | INFO  | Task e04ebcd5-d66a-42f9-a7dd-d03b77ba5cdd is in state STARTED 2025-07-04 18:09:32.229605 | orchestrator | 2025-07-04 18:09:32 | INFO  | Task a21bc798-738f-4a98-9b83-5e93eed95645 is in state STARTED 2025-07-04 18:09:32.234134 | orchestrator | 2025-07-04 18:09:32 | INFO  | Task 9a30792d-643f-4105-babc-b76508c80c19 is in state STARTED 2025-07-04 18:09:32.235350 | orchestrator | 2025-07-04 18:09:32 | INFO  | Task 847f3d47-d817-46da-b0dd-ce5c950699c9 is in state SUCCESS 2025-07-04 18:09:32.237375 | orchestrator | 2025-07-04 18:09:32.237414 | orchestrator | 2025-07-04 18:09:32.237473 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2025-07-04 18:09:32.237486 | orchestrator | 2025-07-04 18:09:32.237497 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2025-07-04 18:09:32.237508 | orchestrator | Friday 04 July 2025 18:04:39 +0000 (0:00:00.252) 0:00:00.252 *********** 2025-07-04 18:09:32.237520 | orchestrator | ok: [testbed-node-3] 2025-07-04 18:09:32.237532 | orchestrator | ok: [testbed-node-4] 2025-07-04 18:09:32.237543 | orchestrator | ok: [testbed-node-5] 2025-07-04 18:09:32.237553 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:09:32.237564 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:09:32.237575 | orchestrator | ok: 
[testbed-node-2] 2025-07-04 18:09:32.237586 | orchestrator | 2025-07-04 18:09:32.237597 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2025-07-04 18:09:32.237608 | orchestrator | Friday 04 July 2025 18:04:39 +0000 (0:00:00.836) 0:00:01.089 *********** 2025-07-04 18:09:32.237619 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:09:32.237631 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:09:32.237641 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:09:32.237652 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:09:32.237663 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:09:32.237674 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:09:32.237685 | orchestrator | 2025-07-04 18:09:32.237696 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2025-07-04 18:09:32.237707 | orchestrator | Friday 04 July 2025 18:04:40 +0000 (0:00:00.807) 0:00:01.896 *********** 2025-07-04 18:09:32.237718 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:09:32.237729 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:09:32.237739 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:09:32.237750 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:09:32.237761 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:09:32.237772 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:09:32.237784 | orchestrator | 2025-07-04 18:09:32.237794 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2025-07-04 18:09:32.237824 | orchestrator | Friday 04 July 2025 18:04:41 +0000 (0:00:00.932) 0:00:02.829 *********** 2025-07-04 18:09:32.237854 | orchestrator | changed: [testbed-node-5] 2025-07-04 18:09:32.237865 | orchestrator | changed: [testbed-node-4] 2025-07-04 18:09:32.237876 | orchestrator | changed: [testbed-node-3] 2025-07-04 18:09:32.237886 | orchestrator | changed: 
[testbed-node-0] 2025-07-04 18:09:32.237897 | orchestrator | changed: [testbed-node-1] 2025-07-04 18:09:32.237908 | orchestrator | changed: [testbed-node-2] 2025-07-04 18:09:32.237918 | orchestrator | 2025-07-04 18:09:32.237936 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2025-07-04 18:09:32.237947 | orchestrator | Friday 04 July 2025 18:04:43 +0000 (0:00:02.313) 0:00:05.142 *********** 2025-07-04 18:09:32.237958 | orchestrator | changed: [testbed-node-3] 2025-07-04 18:09:32.237969 | orchestrator | changed: [testbed-node-4] 2025-07-04 18:09:32.237979 | orchestrator | changed: [testbed-node-5] 2025-07-04 18:09:32.237990 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:09:32.238001 | orchestrator | changed: [testbed-node-1] 2025-07-04 18:09:32.238011 | orchestrator | changed: [testbed-node-2] 2025-07-04 18:09:32.238073 | orchestrator | 2025-07-04 18:09:32.238084 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2025-07-04 18:09:32.238095 | orchestrator | Friday 04 July 2025 18:04:45 +0000 (0:00:01.427) 0:00:06.570 *********** 2025-07-04 18:09:32.238106 | orchestrator | changed: [testbed-node-3] 2025-07-04 18:09:32.238116 | orchestrator | changed: [testbed-node-4] 2025-07-04 18:09:32.238127 | orchestrator | changed: [testbed-node-5] 2025-07-04 18:09:32.238138 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:09:32.238149 | orchestrator | changed: [testbed-node-1] 2025-07-04 18:09:32.238159 | orchestrator | changed: [testbed-node-2] 2025-07-04 18:09:32.238170 | orchestrator | 2025-07-04 18:09:32.238181 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2025-07-04 18:09:32.238200 | orchestrator | Friday 04 July 2025 18:04:46 +0000 (0:00:01.011) 0:00:07.581 *********** 2025-07-04 18:09:32.238218 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:09:32.238238 | orchestrator | skipping: [testbed-node-4] 
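The three `changed` k3s_prereq tasks above (IPv4 forwarding, IPv6 forwarding, IPv6 router advertisements) amount to writing kernel sysctls. A sketch of the settings involved — the exact key names and values are assumptions based on typical k3s prerequisites, not read from the role itself:

```python
# Sketch of the kernel settings the k3s_prereq forwarding tasks apply.
# Keys/values are assumptions based on common k3s prerequisites,
# not extracted from the role.

K3S_SYSCTLS = {
    "net.ipv4.ip_forward": "1",           # enable IPv4 forwarding
    "net.ipv6.conf.all.forwarding": "1",  # enable IPv6 forwarding
    "net.ipv6.conf.all.accept_ra": "2",   # keep accepting RAs even while forwarding
}

def to_sysctl_conf(settings: dict) -> str:
    """Render settings as /etc/sysctl.d-style lines."""
    return "\n".join(f"{key} = {value}" for key, value in sorted(settings.items()))

print(to_sysctl_conf(K3S_SYSCTLS))
```

The `accept_ra = 2` value matters because setting `forwarding = 1` normally makes the kernel ignore router advertisements; `2` overrides that.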
2025-07-04 18:09:32.238267 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:09:32.238340 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:09:32.238361 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:09:32.238378 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:09:32.238393 | orchestrator | 2025-07-04 18:09:32.238410 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2025-07-04 18:09:32.238428 | orchestrator | Friday 04 July 2025 18:04:46 +0000 (0:00:00.616) 0:00:08.197 *********** 2025-07-04 18:09:32.238445 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:09:32.238462 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:09:32.238520 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:09:32.238541 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:09:32.238560 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:09:32.238580 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:09:32.238598 | orchestrator | 2025-07-04 18:09:32.238616 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2025-07-04 18:09:32.238636 | orchestrator | Friday 04 July 2025 18:04:47 +0000 (0:00:00.590) 0:00:08.788 *********** 2025-07-04 18:09:32.238657 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2025-07-04 18:09:32.238676 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-07-04 18:09:32.238696 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:09:32.238717 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2025-07-04 18:09:32.238737 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-07-04 18:09:32.238756 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:09:32.238767 | orchestrator | skipping: [testbed-node-5] => 
(item=net.bridge.bridge-nf-call-iptables)  2025-07-04 18:09:32.238778 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-07-04 18:09:32.238803 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:09:32.238815 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-07-04 18:09:32.238859 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-07-04 18:09:32.238870 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:09:32.238882 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-07-04 18:09:32.238892 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-07-04 18:09:32.238903 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:09:32.238914 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-07-04 18:09:32.238925 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-07-04 18:09:32.238936 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:09:32.238946 | orchestrator | 2025-07-04 18:09:32.238957 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2025-07-04 18:09:32.238968 | orchestrator | Friday 04 July 2025 18:04:48 +0000 (0:00:00.951) 0:00:09.739 *********** 2025-07-04 18:09:32.238979 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:09:32.238989 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:09:32.239000 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:09:32.239011 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:09:32.239022 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:09:32.239033 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:09:32.239043 | orchestrator | 2025-07-04 18:09:32.239055 | orchestrator | TASK [k3s_download : Validating arguments against 
arg spec 'main' - Manage the downloading of K3S binaries] *** 2025-07-04 18:09:32.239066 | orchestrator | Friday 04 July 2025 18:04:49 +0000 (0:00:01.421) 0:00:11.161 *********** 2025-07-04 18:09:32.239077 | orchestrator | ok: [testbed-node-3] 2025-07-04 18:09:32.239088 | orchestrator | ok: [testbed-node-4] 2025-07-04 18:09:32.239099 | orchestrator | ok: [testbed-node-5] 2025-07-04 18:09:32.239109 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:09:32.239120 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:09:32.239131 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:09:32.239142 | orchestrator | 2025-07-04 18:09:32.239152 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2025-07-04 18:09:32.239163 | orchestrator | Friday 04 July 2025 18:04:50 +0000 (0:00:00.671) 0:00:11.832 *********** 2025-07-04 18:09:32.239174 | orchestrator | changed: [testbed-node-3] 2025-07-04 18:09:32.239185 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:09:32.239195 | orchestrator | changed: [testbed-node-5] 2025-07-04 18:09:32.239206 | orchestrator | changed: [testbed-node-2] 2025-07-04 18:09:32.239217 | orchestrator | changed: [testbed-node-1] 2025-07-04 18:09:32.239227 | orchestrator | changed: [testbed-node-4] 2025-07-04 18:09:32.239238 | orchestrator | 2025-07-04 18:09:32.239256 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2025-07-04 18:09:32.239267 | orchestrator | Friday 04 July 2025 18:04:56 +0000 (0:00:05.821) 0:00:17.653 *********** 2025-07-04 18:09:32.239278 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:09:32.239289 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:09:32.239299 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:09:32.239310 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:09:32.239321 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:09:32.239331 | orchestrator | skipping: [testbed-node-2] 2025-07-04 
18:09:32.239342 | orchestrator | 2025-07-04 18:09:32.239353 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2025-07-04 18:09:32.239364 | orchestrator | Friday 04 July 2025 18:04:57 +0000 (0:00:00.967) 0:00:18.621 *********** 2025-07-04 18:09:32.239374 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:09:32.239385 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:09:32.239396 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:09:32.239406 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:09:32.239423 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:09:32.239434 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:09:32.239444 | orchestrator | 2025-07-04 18:09:32.239455 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2025-07-04 18:09:32.239468 | orchestrator | Friday 04 July 2025 18:04:58 +0000 (0:00:01.389) 0:00:20.010 *********** 2025-07-04 18:09:32.239479 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:09:32.239489 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:09:32.239500 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:09:32.239511 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:09:32.239522 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:09:32.239533 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:09:32.239544 | orchestrator | 2025-07-04 18:09:32.239554 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2025-07-04 18:09:32.239565 | orchestrator | Friday 04 July 2025 18:04:59 +0000 (0:00:00.831) 0:00:20.842 *********** 2025-07-04 18:09:32.239576 | orchestrator | skipping: [testbed-node-3] => (item=rancher)  2025-07-04 18:09:32.239587 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)  2025-07-04 18:09:32.239598 | orchestrator | skipping: 
[testbed-node-3] 2025-07-04 18:09:32.239609 | orchestrator | skipping: [testbed-node-4] => (item=rancher)  2025-07-04 18:09:32.239620 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)  2025-07-04 18:09:32.239630 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:09:32.239641 | orchestrator | skipping: [testbed-node-5] => (item=rancher)  2025-07-04 18:09:32.239652 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)  2025-07-04 18:09:32.239662 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:09:32.239673 | orchestrator | skipping: [testbed-node-0] => (item=rancher)  2025-07-04 18:09:32.239684 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)  2025-07-04 18:09:32.239694 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:09:32.239705 | orchestrator | skipping: [testbed-node-1] => (item=rancher)  2025-07-04 18:09:32.239716 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)  2025-07-04 18:09:32.239726 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:09:32.239737 | orchestrator | skipping: [testbed-node-2] => (item=rancher)  2025-07-04 18:09:32.239748 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)  2025-07-04 18:09:32.239758 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:09:32.239769 | orchestrator | 2025-07-04 18:09:32.239780 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2025-07-04 18:09:32.239799 | orchestrator | Friday 04 July 2025 18:05:01 +0000 (0:00:01.432) 0:00:22.274 *********** 2025-07-04 18:09:32.239810 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:09:32.239821 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:09:32.239858 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:09:32.239870 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:09:32.239881 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:09:32.239892 | orchestrator | skipping: 
[testbed-node-2] 2025-07-04 18:09:32.239903 | orchestrator | 2025-07-04 18:09:32.239914 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2025-07-04 18:09:32.239925 | orchestrator | 2025-07-04 18:09:32.239936 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2025-07-04 18:09:32.239947 | orchestrator | Friday 04 July 2025 18:05:02 +0000 (0:00:01.349) 0:00:23.624 *********** 2025-07-04 18:09:32.239958 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:09:32.239969 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:09:32.239979 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:09:32.239990 | orchestrator | 2025-07-04 18:09:32.240001 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2025-07-04 18:09:32.240013 | orchestrator | Friday 04 July 2025 18:05:04 +0000 (0:00:02.144) 0:00:25.769 *********** 2025-07-04 18:09:32.240024 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:09:32.240041 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:09:32.240052 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:09:32.240063 | orchestrator | 2025-07-04 18:09:32.240074 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2025-07-04 18:09:32.240085 | orchestrator | Friday 04 July 2025 18:05:06 +0000 (0:00:01.629) 0:00:27.399 *********** 2025-07-04 18:09:32.240095 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:09:32.240106 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:09:32.240117 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:09:32.240128 | orchestrator | 2025-07-04 18:09:32.240139 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2025-07-04 18:09:32.240150 | orchestrator | Friday 04 July 2025 18:05:07 +0000 (0:00:01.606) 0:00:29.005 *********** 2025-07-04 18:09:32.240161 | orchestrator | ok: [testbed-node-1] 2025-07-04 
18:09:32.240172 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:09:32.240182 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:09:32.240194 | orchestrator | 2025-07-04 18:09:32.240204 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2025-07-04 18:09:32.240215 | orchestrator | Friday 04 July 2025 18:05:09 +0000 (0:00:01.338) 0:00:30.344 *********** 2025-07-04 18:09:32.240226 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:09:32.240237 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:09:32.240253 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:09:32.240264 | orchestrator | 2025-07-04 18:09:32.240275 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2025-07-04 18:09:32.240286 | orchestrator | Friday 04 July 2025 18:05:09 +0000 (0:00:00.556) 0:00:30.901 *********** 2025-07-04 18:09:32.240297 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-04 18:09:32.240308 | orchestrator | 2025-07-04 18:09:32.240319 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2025-07-04 18:09:32.240330 | orchestrator | Friday 04 July 2025 18:05:10 +0000 (0:00:00.675) 0:00:31.576 *********** 2025-07-04 18:09:32.240340 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:09:32.240351 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:09:32.240362 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:09:32.240372 | orchestrator | 2025-07-04 18:09:32.240383 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2025-07-04 18:09:32.240394 | orchestrator | Friday 04 July 2025 18:05:13 +0000 (0:00:03.023) 0:00:34.599 *********** 2025-07-04 18:09:32.240404 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:09:32.240415 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:09:32.240426 | orchestrator 
| changed: [testbed-node-0]
2025-07-04 18:09:32.240437 | orchestrator |
2025-07-04 18:09:32.240448 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] *****************
2025-07-04 18:09:32.240458 | orchestrator | Friday 04 July 2025 18:05:14 +0000 (0:00:01.375) 0:00:35.974 ***********
2025-07-04 18:09:32.240469 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:09:32.240480 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:09:32.240491 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:09:32.240501 | orchestrator |
2025-07-04 18:09:32.240512 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
2025-07-04 18:09:32.240523 | orchestrator | Friday 04 July 2025 18:05:15 +0000 (0:00:00.812) 0:00:36.787 ***********
2025-07-04 18:09:32.240534 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:09:32.240544 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:09:32.240555 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:09:32.240566 | orchestrator |
2025-07-04 18:09:32.240576 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
2025-07-04 18:09:32.240587 | orchestrator | Friday 04 July 2025 18:05:18 +0000 (0:00:02.743) 0:00:39.531 ***********
2025-07-04 18:09:32.240598 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:09:32.240609 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:09:32.240620 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:09:32.240637 | orchestrator |
2025-07-04 18:09:32.240648 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
2025-07-04 18:09:32.240659 | orchestrator | Friday 04 July 2025 18:05:18 +0000 (0:00:00.390) 0:00:39.921 ***********
2025-07-04 18:09:32.240670 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:09:32.240681 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:09:32.240692 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:09:32.240702 | orchestrator |
2025-07-04 18:09:32.240713 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
2025-07-04 18:09:32.240724 | orchestrator | Friday 04 July 2025 18:05:19 +0000 (0:00:00.316) 0:00:40.237 ***********
2025-07-04 18:09:32.240735 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:09:32.240745 | orchestrator | changed: [testbed-node-1]
2025-07-04 18:09:32.240756 | orchestrator | changed: [testbed-node-2]
2025-07-04 18:09:32.240767 | orchestrator |
2025-07-04 18:09:32.240778 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] ***
2025-07-04 18:09:32.240789 | orchestrator | Friday 04 July 2025 18:05:21 +0000 (0:00:01.987) 0:00:42.224 ***********
2025-07-04 18:09:32.240806 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2025-07-04 18:09:32.240818 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2025-07-04 18:09:32.240844 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2025-07-04 18:09:32.240855 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2025-07-04 18:09:32.240866 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2025-07-04 18:09:32.240877 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2025-07-04 18:09:32.240888 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2025-07-04 18:09:32.240899 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2025-07-04 18:09:32.240910 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2025-07-04 18:09:32.240921 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2025-07-04 18:09:32.240932 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2025-07-04 18:09:32.240948 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2025-07-04 18:09:32.240959 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2025-07-04 18:09:32.240970 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2025-07-04 18:09:32.240981 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2025-07-04 18:09:32.240992 | orchestrator | ok: [testbed-node-1]
2025-07-04 18:09:32.241002 | orchestrator | ok: [testbed-node-0]
2025-07-04 18:09:32.241014 | orchestrator | ok: [testbed-node-2]
2025-07-04 18:09:32.241025 | orchestrator |
2025-07-04 18:09:32.241036 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ******************************
2025-07-04 18:09:32.241053 | orchestrator | Friday 04 July 2025 18:06:17 +0000 (0:00:56.440) 0:01:38.665 ***********
2025-07-04 18:09:32.241064 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:09:32.241075 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:09:32.241086 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:09:32.241097 | orchestrator |
2025-07-04 18:09:32.241108 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] *********
2025-07-04 18:09:32.241119 | orchestrator | Friday 04 July 2025 18:06:17 +0000 (0:00:00.326) 0:01:38.992 ***********
2025-07-04 18:09:32.241130 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:09:32.241140 | orchestrator | changed: [testbed-node-1]
2025-07-04 18:09:32.241151 | orchestrator | changed: [testbed-node-2]
2025-07-04 18:09:32.241162 | orchestrator |
2025-07-04 18:09:32.241172 | orchestrator | TASK [k3s_server : Copy K3s service file] **************************************
2025-07-04 18:09:32.241183 | orchestrator | Friday 04 July 2025 18:06:18 +0000 (0:00:01.089) 0:01:40.081 ***********
2025-07-04 18:09:32.241194 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:09:32.241205 | orchestrator | changed: [testbed-node-1]
2025-07-04 18:09:32.241216 | orchestrator | changed: [testbed-node-2]
2025-07-04 18:09:32.241226 | orchestrator |
2025-07-04 18:09:32.241237 | orchestrator | TASK [k3s_server : Enable and check K3s service] *******************************
2025-07-04 18:09:32.241248 | orchestrator | Friday 04 July 2025 18:06:20 +0000 (0:00:01.470) 0:01:41.551 ***********
2025-07-04 18:09:32.241259 | orchestrator | changed: [testbed-node-1]
2025-07-04 18:09:32.241269 | orchestrator | changed: [testbed-node-2]
2025-07-04 18:09:32.241280 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:09:32.241291 | orchestrator |
2025-07-04 18:09:32.241302 | orchestrator | TASK [k3s_server : Wait for node-token] ****************************************
2025-07-04 18:09:32.241313 | orchestrator | Friday 04 July 2025 18:06:36 +0000 (0:00:16.112) 0:01:57.663 ***********
2025-07-04 18:09:32.241323 | orchestrator | ok: [testbed-node-2]
2025-07-04 18:09:32.241334 | orchestrator | ok: [testbed-node-1]
2025-07-04 18:09:32.241345 | orchestrator | ok: [testbed-node-0]
2025-07-04 18:09:32.241355 | orchestrator |
2025-07-04 18:09:32.241366 | orchestrator | TASK [k3s_server : Register node-token file access mode] ***********************
2025-07-04 18:09:32.241377 | orchestrator | Friday 04 July 2025 18:06:37 +0000 (0:00:00.634) 0:01:58.298 ***********
2025-07-04 18:09:32.241388 | orchestrator | ok: [testbed-node-0]
2025-07-04 18:09:32.241398 | orchestrator | ok: [testbed-node-1]
2025-07-04 18:09:32.241409 | orchestrator | ok: [testbed-node-2]
2025-07-04 18:09:32.241420 | orchestrator |
2025-07-04 18:09:32.241431 | orchestrator | TASK [k3s_server : Change file access node-token] ******************************
2025-07-04 18:09:32.241441 | orchestrator | Friday 04 July 2025 18:06:37 +0000 (0:00:00.617) 0:01:58.915 ***********
2025-07-04 18:09:32.241452 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:09:32.241463 | orchestrator | changed: [testbed-node-1]
2025-07-04 18:09:32.241474 | orchestrator | changed: [testbed-node-2]
2025-07-04 18:09:32.241485 | orchestrator |
2025-07-04 18:09:32.241501 | orchestrator | TASK [k3s_server : Read node-token from master] ********************************
2025-07-04 18:09:32.241513 | orchestrator | Friday 04 July 2025 18:06:38 +0000 (0:00:00.779) 0:01:59.695 ***********
2025-07-04 18:09:32.241524 | orchestrator | ok: [testbed-node-1]
2025-07-04 18:09:32.241535 | orchestrator | ok: [testbed-node-0]
2025-07-04 18:09:32.241546 | orchestrator | ok: [testbed-node-2]
2025-07-04 18:09:32.241556 | orchestrator |
2025-07-04 18:09:32.241581 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************
2025-07-04 18:09:32.241592 | orchestrator | Friday 04 July 2025 18:06:39 +0000 (0:00:00.939) 0:02:00.634 ***********
2025-07-04 18:09:32.241613 | orchestrator | ok: [testbed-node-0]
2025-07-04 18:09:32.241624 | orchestrator | ok: [testbed-node-1]
2025-07-04 18:09:32.241643 | orchestrator | ok: [testbed-node-2]
2025-07-04 18:09:32.241662 | orchestrator |
2025-07-04 18:09:32.241678 | orchestrator | TASK [k3s_server : Restore node-token file access] *****************************
2025-07-04 18:09:32.241695 | orchestrator | Friday 04 July 2025 18:06:39 +0000 (0:00:00.338) 0:02:00.973 ***********
2025-07-04 18:09:32.241713 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:09:32.241742 | orchestrator | changed: [testbed-node-1]
2025-07-04 18:09:32.241762 | orchestrator | changed: [testbed-node-2]
2025-07-04 18:09:32.241781 | orchestrator |
2025-07-04 18:09:32.241793 | orchestrator | TASK [k3s_server : Create directory .kube] *************************************
2025-07-04 18:09:32.241804 | orchestrator | Friday 04 July 2025 18:06:40 +0000 (0:00:00.716) 0:02:01.689 ***********
2025-07-04 18:09:32.241814 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:09:32.241825 | orchestrator | changed: [testbed-node-1]
2025-07-04 18:09:32.241853 | orchestrator | changed: [testbed-node-2]
2025-07-04 18:09:32.241864 | orchestrator |
2025-07-04 18:09:32.241874 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ********************
2025-07-04 18:09:32.241885 | orchestrator | Friday 04 July 2025 18:06:41 +0000 (0:00:00.677) 0:02:02.367 ***********
2025-07-04 18:09:32.241896 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:09:32.241906 | orchestrator | changed: [testbed-node-1]
2025-07-04 18:09:32.241917 | orchestrator | changed: [testbed-node-2]
2025-07-04 18:09:32.241929 | orchestrator |
2025-07-04 18:09:32.241939 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
2025-07-04 18:09:32.241950 | orchestrator | Friday 04 July 2025 18:06:42 +0000 (0:00:01.175) 0:02:03.542 ***********
2025-07-04 18:09:32.241961 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:09:32.241972 | orchestrator | changed: [testbed-node-1]
2025-07-04 18:09:32.241983 | orchestrator | changed: [testbed-node-2]
2025-07-04 18:09:32.241993 | orchestrator |
2025-07-04 18:09:32.242004 | orchestrator | TASK [k3s_server : Create kubectl symlink] *************************************
2025-07-04 18:09:32.242061 | orchestrator | Friday 04 July 2025 18:06:43 +0000 (0:00:00.852) 0:02:04.395 ***********
2025-07-04 18:09:32.242075 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:09:32.242086 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:09:32.242097 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:09:32.242108 | orchestrator |
2025-07-04 18:09:32.242119 | orchestrator | TASK [k3s_server : Create crictl symlink] **************************************
2025-07-04 18:09:32.242130 | orchestrator | Friday 04 July 2025 18:06:43 +0000 (0:00:00.259) 0:02:04.655 ***********
2025-07-04 18:09:32.242141 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:09:32.242152 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:09:32.242163 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:09:32.242174 | orchestrator |
2025-07-04 18:09:32.242185 | orchestrator | TASK [k3s_server : Get contents of manifests folder] ***************************
2025-07-04 18:09:32.242196 | orchestrator | Friday 04 July 2025 18:06:43 +0000 (0:00:00.260) 0:02:04.915 ***********
2025-07-04 18:09:32.242206 | orchestrator | ok: [testbed-node-1]
2025-07-04 18:09:32.242217 | orchestrator | ok: [testbed-node-2]
2025-07-04 18:09:32.242228 | orchestrator | ok: [testbed-node-0]
2025-07-04 18:09:32.242239 | orchestrator |
2025-07-04 18:09:32.243036 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
2025-07-04 18:09:32.243083 | orchestrator | Friday 04 July 2025 18:06:44 +0000 (0:00:00.809) 0:02:05.724 ***********
2025-07-04 18:09:32.243097 | orchestrator | ok: [testbed-node-0]
2025-07-04 18:09:32.243110 | orchestrator | ok: [testbed-node-1]
2025-07-04 18:09:32.243122 | orchestrator | ok: [testbed-node-2]
2025-07-04 18:09:32.243134 | orchestrator |
2025-07-04 18:09:32.243147 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
2025-07-04 18:09:32.243159 | orchestrator | Friday 04 July 2025 18:06:45 +0000 (0:00:00.837) 0:02:06.562 ***********
2025-07-04 18:09:32.243172 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2025-07-04 18:09:32.243184 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2025-07-04 18:09:32.243196 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2025-07-04 18:09:32.243209 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2025-07-04 18:09:32.243233 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2025-07-04 18:09:32.243247 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2025-07-04 18:09:32.243260 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2025-07-04 18:09:32.243271 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2025-07-04 18:09:32.243282 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2025-07-04 18:09:32.243292 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2025-07-04 18:09:32.243303 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
2025-07-04 18:09:32.243313 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2025-07-04 18:09:32.243337 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
2025-07-04 18:09:32.243348 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2025-07-04 18:09:32.243357 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2025-07-04 18:09:32.243367 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2025-07-04 18:09:32.243377 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2025-07-04 18:09:32.243386 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2025-07-04 18:09:32.243396 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2025-07-04 18:09:32.243405 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2025-07-04 18:09:32.243415 | orchestrator |
2025-07-04 18:09:32.243425 | orchestrator | PLAY [Deploy k3s worker nodes] *************************************************
2025-07-04 18:09:32.243434 | orchestrator |
2025-07-04 18:09:32.243444 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
2025-07-04 18:09:32.243453 | orchestrator | Friday 04 July 2025 18:06:48 +0000 (0:00:03.256) 0:02:09.818 ***********
2025-07-04 18:09:32.243463 | orchestrator | ok: [testbed-node-3]
2025-07-04 18:09:32.243477 | orchestrator | ok: [testbed-node-4]
2025-07-04 18:09:32.243487 | orchestrator | ok: [testbed-node-5]
2025-07-04 18:09:32.243496 | orchestrator |
2025-07-04 18:09:32.243506 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] *******************************
2025-07-04 18:09:32.243515 | orchestrator | Friday 04 July 2025 18:06:49 +0000 (0:00:00.532) 0:02:10.351 ***********
2025-07-04 18:09:32.243525 | orchestrator | ok: [testbed-node-3]
2025-07-04 18:09:32.243535 | orchestrator | ok: [testbed-node-4]
2025-07-04 18:09:32.243544 | orchestrator | ok: [testbed-node-5]
2025-07-04 18:09:32.243553 | orchestrator |
2025-07-04 18:09:32.243563 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ******************************
2025-07-04 18:09:32.243573 | orchestrator | Friday 04 July 2025 18:06:49 +0000 (0:00:00.636) 0:02:10.987 ***********
2025-07-04 18:09:32.243583 | orchestrator | ok: [testbed-node-3]
2025-07-04 18:09:32.243592 | orchestrator | ok: [testbed-node-4]
2025-07-04 18:09:32.243602 | orchestrator | ok: [testbed-node-5]
2025-07-04 18:09:32.243611 | orchestrator |
2025-07-04 18:09:32.243621 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] **********************
2025-07-04 18:09:32.243630 | orchestrator | Friday 04 July 2025 18:06:50 +0000 (0:00:00.282) 0:02:11.270 ***********
2025-07-04 18:09:32.243640 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-07-04 18:09:32.243650 | orchestrator |
2025-07-04 18:09:32.243659 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] *************************
2025-07-04 18:09:32.243669 | orchestrator | Friday 04 July 2025 18:06:50 +0000 (0:00:00.595) 0:02:11.865 ***********
2025-07-04 18:09:32.243690 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:09:32.243700 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:09:32.243709 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:09:32.243719 | orchestrator |
2025-07-04 18:09:32.243728 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
2025-07-04 18:09:32.243738 | orchestrator | Friday 04 July 2025 18:06:50 +0000 (0:00:00.301) 0:02:12.167 ***********
2025-07-04 18:09:32.243747 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:09:32.243757 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:09:32.243767 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:09:32.243776 | orchestrator |
2025-07-04 18:09:32.243786 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
2025-07-04 18:09:32.243795 | orchestrator | Friday 04 July 2025 18:06:51 +0000 (0:00:00.343) 0:02:12.511 ***********
2025-07-04 18:09:32.243805 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:09:32.243814 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:09:32.243824 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:09:32.243881 | orchestrator |
2025-07-04 18:09:32.243892 | orchestrator | TASK [k3s_agent : Configure the k3s service] ***********************************
2025-07-04 18:09:32.243902 | orchestrator | Friday 04 July 2025 18:06:51 +0000 (0:00:00.270) 0:02:12.781 ***********
2025-07-04 18:09:32.243912 | orchestrator | changed: [testbed-node-3]
2025-07-04 18:09:32.243921 | orchestrator | changed: [testbed-node-4]
2025-07-04 18:09:32.243931 | orchestrator | changed: [testbed-node-5]
2025-07-04 18:09:32.243940 | orchestrator |
2025-07-04 18:09:32.243950 | orchestrator | TASK [k3s_agent : Manage k3s service] ******************************************
2025-07-04 18:09:32.243959 | orchestrator | Friday 04 July 2025 18:06:53 +0000 (0:00:01.438) 0:02:14.219 ***********
2025-07-04 18:09:32.243969 | orchestrator | changed: [testbed-node-5]
2025-07-04 18:09:32.243978 | orchestrator | changed: [testbed-node-3]
2025-07-04 18:09:32.243988 | orchestrator | changed: [testbed-node-4]
2025-07-04 18:09:32.243997 | orchestrator |
2025-07-04 18:09:32.244007 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2025-07-04 18:09:32.244016 | orchestrator |
2025-07-04 18:09:32.244026 | orchestrator | TASK [Get home directory of operator user] *************************************
2025-07-04 18:09:32.244035 | orchestrator | Friday 04 July 2025 18:07:02 +0000 (0:00:09.645) 0:02:23.865 ***********
2025-07-04 18:09:32.244045 | orchestrator | ok: [testbed-manager]
2025-07-04 18:09:32.244055 | orchestrator |
2025-07-04 18:09:32.244064 | orchestrator | TASK [Create .kube directory] **************************************************
2025-07-04 18:09:32.244074 | orchestrator | Friday 04 July 2025 18:07:03 +0000 (0:00:00.925) 0:02:24.790 ***********
2025-07-04 18:09:32.244083 | orchestrator | changed: [testbed-manager]
2025-07-04 18:09:32.244093 | orchestrator |
2025-07-04 18:09:32.244102 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2025-07-04 18:09:32.244112 | orchestrator | Friday 04 July 2025 18:07:04 +0000 (0:00:00.477) 0:02:25.267 ***********
2025-07-04 18:09:32.244121 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2025-07-04 18:09:32.244131 | orchestrator |
2025-07-04 18:09:32.244146 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2025-07-04 18:09:32.244156 | orchestrator | Friday 04 July 2025 18:07:05 +0000 (0:00:01.208) 0:02:26.475 ***********
2025-07-04 18:09:32.244165 | orchestrator | changed: [testbed-manager]
2025-07-04 18:09:32.244173 | orchestrator |
2025-07-04 18:09:32.244181 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2025-07-04 18:09:32.244188 | orchestrator | Friday 04 July 2025 18:07:06 +0000 (0:00:01.286) 0:02:27.762 ***********
2025-07-04 18:09:32.244196 | orchestrator | changed: [testbed-manager]
2025-07-04 18:09:32.244204 | orchestrator |
2025-07-04 18:09:32.244212 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2025-07-04 18:09:32.244219 | orchestrator | Friday 04 July 2025 18:07:07 +0000 (0:00:00.743) 0:02:28.505 ***********
2025-07-04 18:09:32.244227 | orchestrator | changed: [testbed-manager -> localhost]
2025-07-04 18:09:32.244240 | orchestrator |
2025-07-04 18:09:32.244248 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2025-07-04 18:09:32.244256 | orchestrator | Friday 04 July 2025 18:07:08 +0000 (0:00:01.637) 0:02:30.143 ***********
2025-07-04 18:09:32.244264 | orchestrator | changed: [testbed-manager -> localhost]
2025-07-04 18:09:32.244271 | orchestrator |
2025-07-04 18:09:32.244279 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2025-07-04 18:09:32.244287 | orchestrator | Friday 04 July 2025 18:07:09 +0000 (0:00:00.793) 0:02:30.936 ***********
2025-07-04 18:09:32.244294 | orchestrator | changed: [testbed-manager]
2025-07-04 18:09:32.244302 | orchestrator |
2025-07-04 18:09:32.244310 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2025-07-04 18:09:32.244321 | orchestrator | Friday 04 July 2025 18:07:10 +0000 (0:00:00.453) 0:02:31.390 ***********
2025-07-04 18:09:32.244329 | orchestrator | changed: [testbed-manager]
2025-07-04 18:09:32.244337 | orchestrator |
2025-07-04 18:09:32.244344 | orchestrator | PLAY [Apply role kubectl] ******************************************************
2025-07-04 18:09:32.244352 | orchestrator |
2025-07-04 18:09:32.244360 | orchestrator | TASK [kubectl : Gather variables for each operating system] ********************
2025-07-04 18:09:32.244368 | orchestrator | Friday 04 July 2025 18:07:10 +0000 (0:00:00.409) 0:02:31.799 ***********
2025-07-04 18:09:32.244375 | orchestrator | ok: [testbed-manager]
2025-07-04 18:09:32.244383 | orchestrator |
2025-07-04 18:09:32.244391 | orchestrator | TASK [kubectl : Include distribution specific install tasks] *******************
2025-07-04 18:09:32.244398 | orchestrator | Friday 04 July 2025 18:07:10 +0000 (0:00:00.154) 0:02:31.953 ***********
2025-07-04 18:09:32.244406 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager
2025-07-04 18:09:32.244414 | orchestrator |
2025-07-04 18:09:32.244421 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ******************
2025-07-04 18:09:32.244429 | orchestrator | Friday 04 July 2025 18:07:11 +0000 (0:00:00.634) 0:02:32.588 ***********
2025-07-04 18:09:32.244437 | orchestrator | ok: [testbed-manager]
2025-07-04 18:09:32.244444 | orchestrator |
2025-07-04 18:09:32.244452 | orchestrator | TASK [kubectl : Install apt-transport-https package] ***************************
2025-07-04 18:09:32.244460 | orchestrator | Friday 04 July 2025 18:07:12 +0000 (0:00:01.008) 0:02:33.597 ***********
2025-07-04 18:09:32.244468 | orchestrator | ok: [testbed-manager]
2025-07-04 18:09:32.244475 | orchestrator |
2025-07-04 18:09:32.244483 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2025-07-04 18:09:32.244491 | orchestrator | Friday 04 July 2025 18:07:15 +0000 (0:00:02.825) 0:02:36.422 ***********
2025-07-04 18:09:32.244498 | orchestrator | changed: [testbed-manager]
2025-07-04 18:09:32.244506 | orchestrator |
2025-07-04 18:09:32.244514 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2025-07-04 18:09:32.244521 | orchestrator | Friday 04 July 2025 18:07:16 +0000 (0:00:00.860) 0:02:37.282 ***********
2025-07-04 18:09:32.244529 | orchestrator | ok: [testbed-manager]
2025-07-04 18:09:32.244537 | orchestrator |
2025-07-04 18:09:32.244544 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2025-07-04 18:09:32.244552 | orchestrator | Friday 04 July 2025 18:07:16 +0000 (0:00:00.595) 0:02:37.878 ***********
2025-07-04 18:09:32.244560 | orchestrator | changed: [testbed-manager]
2025-07-04 18:09:32.244567 | orchestrator |
2025-07-04 18:09:32.244575 | orchestrator | TASK [kubectl : Install required packages] *************************************
2025-07-04 18:09:32.244583 | orchestrator | Friday 04 July 2025 18:07:24 +0000 (0:00:08.089) 0:02:45.968 ***********
2025-07-04 18:09:32.244590 | orchestrator | changed: [testbed-manager]
2025-07-04 18:09:32.244598 | orchestrator |
2025-07-04 18:09:32.244606 | orchestrator | TASK [kubectl : Remove kubectl symlink] ****************************************
2025-07-04 18:09:32.244614 | orchestrator | Friday 04 July 2025 18:07:39 +0000 (0:00:14.844) 0:03:00.812 ***********
2025-07-04 18:09:32.244621 | orchestrator | ok: [testbed-manager]
2025-07-04 18:09:32.244629 | orchestrator |
2025-07-04 18:09:32.244637 | orchestrator | PLAY [Run post actions on master nodes] ****************************************
2025-07-04 18:09:32.244649 | orchestrator |
2025-07-04 18:09:32.244657 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2025-07-04 18:09:32.244665 | orchestrator | Friday 04 July 2025 18:07:40 +0000 (0:00:00.812) 0:03:01.625 ***********
2025-07-04 18:09:32.244672 | orchestrator | ok: [testbed-node-0]
2025-07-04 18:09:32.244680 | orchestrator | ok: [testbed-node-1]
2025-07-04 18:09:32.244688 | orchestrator | ok: [testbed-node-2]
2025-07-04 18:09:32.244695 | orchestrator |
2025-07-04 18:09:32.244703 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2025-07-04 18:09:32.244711 | orchestrator | Friday 04 July 2025 18:07:41 +0000 (0:00:00.671) 0:03:02.297 ***********
2025-07-04 18:09:32.244719 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:09:32.244726 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:09:32.244734 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:09:32.244742 | orchestrator |
2025-07-04 18:09:32.244750 | orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
2025-07-04 18:09:32.244758 | orchestrator | Friday 04 July 2025 18:07:41 +0000 (0:00:00.382) 0:03:02.679 ***********
2025-07-04 18:09:32.244765 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-04 18:09:32.244773 | orchestrator |
2025-07-04 18:09:32.244781 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2025-07-04 18:09:32.244793 | orchestrator | Friday 04 July 2025 18:07:42 +0000 (0:00:00.568) 0:03:03.248 ***********
2025-07-04 18:09:32.244802 | orchestrator | changed: [testbed-node-0 -> localhost]
2025-07-04 18:09:32.244809 | orchestrator |
2025-07-04 18:09:32.244817 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
2025-07-04 18:09:32.244825 | orchestrator | Friday 04 July 2025 18:07:43 +0000 (0:00:01.551) 0:03:04.800 ***********
2025-07-04 18:09:32.244847 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-07-04 18:09:32.244855 | orchestrator |
2025-07-04 18:09:32.244863 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
2025-07-04 18:09:32.244871 | orchestrator | Friday 04 July 2025 18:07:44 +0000 (0:00:01.046) 0:03:05.847 ***********
2025-07-04 18:09:32.244878 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:09:32.244886 | orchestrator |
2025-07-04 18:09:32.244894 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] **********************
2025-07-04 18:09:32.244902 | orchestrator | Friday 04 July 2025 18:07:44 +0000 (0:00:00.242) 0:03:06.089 ***********
2025-07-04 18:09:32.244910 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-07-04 18:09:32.244918 | orchestrator |
2025-07-04 18:09:32.244925 | orchestrator | TASK [k3s_server_post : Check Cilium version] **********************************
2025-07-04 18:09:32.244933 | orchestrator | Friday 04 July 2025 18:07:46 +0000 (0:00:01.302) 0:03:07.392 ***********
2025-07-04 18:09:32.244941 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:09:32.244948 | orchestrator |
2025-07-04 18:09:32.244956 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************
2025-07-04 18:09:32.244967 | orchestrator | Friday 04 July 2025 18:07:46 +0000 (0:00:00.297) 0:03:07.690 ***********
2025-07-04 18:09:32.244975 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:09:32.244983 | orchestrator |
2025-07-04 18:09:32.244991 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] **********************
2025-07-04 18:09:32.244998 | orchestrator | Friday 04 July 2025 18:07:46 +0000 (0:00:00.260) 0:03:07.950 ***********
2025-07-04 18:09:32.245006 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:09:32.245014 | orchestrator |
2025-07-04 18:09:32.245022 | orchestrator | TASK [k3s_server_post : Log result] ********************************************
2025-07-04 18:09:32.245029 | orchestrator | Friday 04 July 2025 18:07:46 +0000 (0:00:00.248) 0:03:08.199 ***********
2025-07-04 18:09:32.245037 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:09:32.245045 | orchestrator |
2025-07-04 18:09:32.245052 | orchestrator | TASK [k3s_server_post : Install Cilium] ****************************************
2025-07-04 18:09:32.245065 | orchestrator | Friday 04 July 2025 18:07:47 +0000 (0:00:00.248) 0:03:08.448 ***********
2025-07-04 18:09:32.245073 | orchestrator | changed: [testbed-node-0 -> localhost]
2025-07-04 18:09:32.245081 | orchestrator |
2025-07-04 18:09:32.245088 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] *****************************
2025-07-04 18:09:32.245096 | orchestrator | Friday 04 July 2025 18:07:51 +0000 (0:00:04.609) 0:03:13.057 ***********
2025-07-04 18:09:32.245104 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator)
2025-07-04 18:09:32.245112 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left).
2025-07-04 18:09:32.245120 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium)
2025-07-04 18:09:32.245127 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay)
2025-07-04 18:09:32.245135 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui)
2025-07-04 18:09:32.245143 | orchestrator |
2025-07-04 18:09:32.245150 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
2025-07-04 18:09:32.245158 | orchestrator | Friday 04 July 2025 18:08:55 +0000 (0:01:03.529) 0:04:16.587 ***********
2025-07-04 18:09:32.245166 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-07-04 18:09:32.245173 | orchestrator |
2025-07-04 18:09:32.245181 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ********************
2025-07-04 18:09:32.245189 | orchestrator | Friday 04 July 2025 18:08:56 +0000 (0:00:01.500) 0:04:18.088 ***********
2025-07-04 18:09:32.245197 | orchestrator | changed: [testbed-node-0 -> localhost]
2025-07-04 18:09:32.245204 | orchestrator |
2025-07-04 18:09:32.245212 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] ***********************************
2025-07-04 18:09:32.245220 | orchestrator | Friday 04 July 2025 18:08:58 +0000 (0:00:01.764) 0:04:19.852 ***********
2025-07-04 18:09:32.245227 | orchestrator | changed: [testbed-node-0 -> localhost]
2025-07-04 18:09:32.245235 | orchestrator |
2025-07-04 18:09:32.245243 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
2025-07-04 18:09:32.245251 | orchestrator | Friday 04 July 2025 18:09:00 +0000 (0:00:01.962) 0:04:21.815 ***********
2025-07-04 18:09:32.245258 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:09:32.245266 | orchestrator |
2025-07-04 18:09:32.245274 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] *************************
2025-07-04 18:09:32.245282 | orchestrator | Friday 04 July 2025 18:09:00 +0000 (0:00:00.214) 0:04:22.030 ***********
2025-07-04 18:09:32.245289 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
2025-07-04 18:09:32.245297 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)
2025-07-04 18:09:32.245305 | orchestrator |
2025-07-04 18:09:32.245313 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] ***********************************
2025-07-04 18:09:32.245321 | orchestrator | Friday 04 July 2025 18:09:03 +0000 (0:00:02.515) 0:04:24.545 ***********
2025-07-04 18:09:32.245328 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:09:32.245336 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:09:32.245344 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:09:32.245351 | orchestrator |
2025-07-04 18:09:32.245359 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
2025-07-04 18:09:32.245367 | orchestrator | Friday 04 July 2025 18:09:03 +0000 (0:00:00.432) 0:04:24.977 ***********
2025-07-04 18:09:32.245375 | orchestrator | ok: [testbed-node-0]
2025-07-04 18:09:32.245383 | orchestrator | ok: [testbed-node-1]
2025-07-04 18:09:32.245391 | orchestrator | ok: [testbed-node-2]
2025-07-04 18:09:32.245398 | orchestrator |
2025-07-04 18:09:32.245410 | orchestrator | PLAY [Apply role k9s] **********************************************************
2025-07-04 18:09:32.245418 | orchestrator |
2025-07-04 18:09:32.245426 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************
2025-07-04 18:09:32.245434 | orchestrator | Friday 04 July 2025 18:09:04 +0000 (0:00:00.981) 0:04:25.958 ***********
2025-07-04 18:09:32.245442 | orchestrator | ok: [testbed-manager]
2025-07-04 18:09:32.245454 | orchestrator |
2025-07-04 18:09:32.245462 | orchestrator | TASK [k9s : Include distribution specific install tasks] ***********************
2025-07-04 18:09:32.245470 | orchestrator | Friday 04 July 2025 18:09:05 +0000 (0:00:00.271) 0:04:26.229 ***********
2025-07-04 18:09:32.245478 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager
2025-07-04 18:09:32.245486 | orchestrator |
2025-07-04 18:09:32.245493 | orchestrator | TASK [k9s : Install k9s packages] **********************************************
2025-07-04 18:09:32.245501 | orchestrator | Friday 04 July 2025 18:09:05 +0000 (0:00:00.210) 0:04:26.440 ***********
2025-07-04 18:09:32.245509 | orchestrator | changed: [testbed-manager]
2025-07-04 18:09:32.245516 | orchestrator |
2025-07-04 18:09:32.245524 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************
2025-07-04 18:09:32.245532 | orchestrator |
2025-07-04 18:09:32.245540 | orchestrator | TASK [Merge labels, annotations, and taints] ***********************************
2025-07-04 18:09:32.245548 | orchestrator | Friday 04 July 2025 18:09:11 +0000 (0:00:06.388) 0:04:32.828 ***********
2025-07-04 18:09:32.245555 | orchestrator | ok: [testbed-node-3]
2025-07-04 18:09:32.245563 | orchestrator | ok: [testbed-node-4]
2025-07-04 18:09:32.245571 | orchestrator | ok: [testbed-node-5]
2025-07-04 18:09:32.245579 | orchestrator | ok: [testbed-node-0]
2025-07-04 18:09:32.245590 | orchestrator | ok: [testbed-node-1]
2025-07-04 18:09:32.245598 | orchestrator | ok: [testbed-node-2]
2025-07-04 18:09:32.245606 | orchestrator |
2025-07-04 18:09:32.245613 | orchestrator | TASK [Manage labels] ***********************************************************
2025-07-04 18:09:32.245621 | orchestrator | Friday 04 July 2025 18:09:12 +0000 (0:00:00.881) 0:04:33.710 ***********
2025-07-04 18:09:32.245629 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2025-07-04 18:09:32.245637 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2025-07-04 18:09:32.245644 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2025-07-04 18:09:32.245652 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2025-07-04 18:09:32.245660 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2025-07-04 18:09:32.245667 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2025-07-04 18:09:32.245675 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2025-07-04 18:09:32.245683 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled)
2025-07-04 18:09:32.245690 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2025-07-04 18:09:32.245698 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled)
2025-07-04 18:09:32.245706 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2025-07-04 18:09:32.245714 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled)
2025-07-04 18:09:32.245721 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2025-07-04 18:09:32.245729 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2025-07-04 18:09:32.245737 | orchestrator | ok: [testbed-node-1 ->
localhost] => (item=node-role.osism.tech/network-plane=true) 2025-07-04 18:09:32.245744 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-07-04 18:09:32.245752 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-07-04 18:09:32.245760 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-07-04 18:09:32.245768 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-07-04 18:09:32.245775 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-07-04 18:09:32.245788 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-07-04 18:09:32.245796 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-07-04 18:09:32.245804 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-07-04 18:09:32.245811 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-07-04 18:09:32.245819 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-07-04 18:09:32.245827 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-07-04 18:09:32.245845 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-07-04 18:09:32.245853 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-07-04 18:09:32.245861 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-07-04 18:09:32.245869 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-07-04 18:09:32.245877 | orchestrator | 2025-07-04 18:09:32.245889 | 
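The "Merge labels, annotations, and taints" and "Manage labels" tasks above combine per-group label sets (control-plane, compute-plane, rook-*, openstack-control-plane) and apply the result per node. A minimal Python sketch of the merge step, with illustrative names and assumed semantics (later groups win on key conflicts) rather than the playbook's actual code:

```python
# Hedged sketch: combining per-group node label sets into one mapping.
# Later groups override earlier ones on conflicts, like a plain dict update.
def merge_labels(*label_sets: dict) -> dict:
    merged: dict = {}
    for labels in label_sets:
        merged.update(labels)
    return merged

control_plane = {"node-role.osism.tech/control-plane": "true"}
openstack = {"openstack-control-plane": "enabled"}
print(merge_labels(control_plane, openstack))
```

The merged mapping would then be applied per node, e.g. with `kubectl label node testbed-node-0 openstack-control-plane=enabled --overwrite`, matching the per-item `kubectl` delegation seen in the task output above.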
orchestrator | TASK [Manage annotations] ****************************************************** 2025-07-04 18:09:32.245897 | orchestrator | Friday 04 July 2025 18:09:29 +0000 (0:00:17.218) 0:04:50.929 *********** 2025-07-04 18:09:32.245905 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:09:32.245913 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:09:32.245921 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:09:32.245928 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:09:32.245936 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:09:32.245944 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:09:32.245952 | orchestrator | 2025-07-04 18:09:32.245959 | orchestrator | TASK [Manage taints] *********************************************************** 2025-07-04 18:09:32.245967 | orchestrator | Friday 04 July 2025 18:09:30 +0000 (0:00:00.699) 0:04:51.628 *********** 2025-07-04 18:09:32.245975 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:09:32.245983 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:09:32.245990 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:09:32.245998 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:09:32.246006 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:09:32.246039 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:09:32.246049 | orchestrator | 2025-07-04 18:09:32.246057 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-04 18:09:32.246064 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-04 18:09:32.246077 | orchestrator | testbed-node-0 : ok=46  changed=21  unreachable=0 failed=0 skipped=27  rescued=0 ignored=0 2025-07-04 18:09:32.246086 | orchestrator | testbed-node-1 : ok=34  changed=14  unreachable=0 failed=0 skipped=24  rescued=0 ignored=0 2025-07-04 18:09:32.246094 | orchestrator | testbed-node-2 : ok=34  
changed=14  unreachable=0 failed=0 skipped=24  rescued=0 ignored=0 2025-07-04 18:09:32.246101 | orchestrator | testbed-node-3 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-07-04 18:09:32.246109 | orchestrator | testbed-node-4 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-07-04 18:09:32.246117 | orchestrator | testbed-node-5 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-07-04 18:09:32.246125 | orchestrator | 2025-07-04 18:09:32.246133 | orchestrator | 2025-07-04 18:09:32.246141 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-04 18:09:32.246152 | orchestrator | Friday 04 July 2025 18:09:30 +0000 (0:00:00.428) 0:04:52.056 *********** 2025-07-04 18:09:32.246160 | orchestrator | =============================================================================== 2025-07-04 18:09:32.246168 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 63.53s 2025-07-04 18:09:32.246176 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 56.44s 2025-07-04 18:09:32.246184 | orchestrator | Manage labels ---------------------------------------------------------- 17.22s 2025-07-04 18:09:32.246191 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 16.11s 2025-07-04 18:09:32.246199 | orchestrator | kubectl : Install required packages ------------------------------------ 14.84s 2025-07-04 18:09:32.246207 | orchestrator | k3s_agent : Manage k3s service ------------------------------------------ 9.65s 2025-07-04 18:09:32.246215 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 8.09s 2025-07-04 18:09:32.246223 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 6.39s 2025-07-04 18:09:32.246230 | orchestrator | k3s_download : 
Download k3s binary x64 ---------------------------------- 5.82s 2025-07-04 18:09:32.246238 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 4.61s 2025-07-04 18:09:32.246246 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.26s 2025-07-04 18:09:32.246254 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 3.02s 2025-07-04 18:09:32.246262 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 2.83s 2025-07-04 18:09:32.246269 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 2.74s 2025-07-04 18:09:32.246277 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 2.52s 2025-07-04 18:09:32.246285 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.31s 2025-07-04 18:09:32.246293 | orchestrator | k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers --- 2.14s 2025-07-04 18:09:32.246301 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 1.99s 2025-07-04 18:09:32.246308 | orchestrator | k3s_server_post : Apply BGP manifests ----------------------------------- 1.96s 2025-07-04 18:09:32.246316 | orchestrator | k3s_server_post : Copy BGP manifests to first master -------------------- 1.76s 2025-07-04 18:09:32.246324 | orchestrator | 2025-07-04 18:09:32 | INFO  | Task 6e1b8e93-1adc-4061-afa0-122aa0f01357 is in state STARTED 2025-07-04 18:09:32.246336 | orchestrator | 2025-07-04 18:09:32 | INFO  | Task 12c96f68-0fd2-49d3-8927-ff967ba63c61 is in state STARTED 2025-07-04 18:09:32.246344 | orchestrator | 2025-07-04 18:09:32 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED 2025-07-04 18:09:32.246352 | orchestrator | 2025-07-04 18:09:32 | INFO  | Wait 1 second(s) until the 
next check 2025-07-04 18:09:41.395345 | orchestrator | 2025-07-04 18:09:41 | INFO  | Task e04ebcd5-d66a-42f9-a7dd-d03b77ba5cdd is in state SUCCESS 2025-07-04 18:09:44.473898 | orchestrator | 2025-07-04 18:09:44 | INFO  | Task 12c96f68-0fd2-49d3-8927-ff967ba63c61 is in state SUCCESS 2025-07-04 18:10:42.467191 | orchestrator | 2025-07-04 18:10:42 | INFO  | Task c24ea607-8fb2-4caa-8cdb-dbeb4a8465dc is in state STARTED 2025-07-04 18:10:42.468674 | orchestrator | 2025-07-04 18:10:42 | INFO  | Task
6e1b8e93-1adc-4061-afa0-122aa0f01357 is in state STARTED 2025-07-04 18:10:42.470342 | orchestrator | 2025-07-04 18:10:42 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED 2025-07-04 18:10:42.470368 | orchestrator | 2025-07-04 18:10:42 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:10:45.522932 | orchestrator | 2025-07-04 18:10:45 | INFO  | Task c24ea607-8fb2-4caa-8cdb-dbeb4a8465dc is in state STARTED 2025-07-04 18:10:45.523946 | orchestrator | 2025-07-04 18:10:45 | INFO  | Task a21bc798-738f-4a98-9b83-5e93eed95645 is in state STARTED 2025-07-04 18:10:45.525974 | orchestrator | 2025-07-04 18:10:45 | INFO  | Task 9a30792d-643f-4105-babc-b76508c80c19 is in state STARTED 2025-07-04 18:10:45.527353 | orchestrator | 2025-07-04 18:10:45 | INFO  | Task 6e1b8e93-1adc-4061-afa0-122aa0f01357 is in state STARTED 2025-07-04 18:10:45.528430 | orchestrator | 2025-07-04 18:10:45 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED 2025-07-04 18:10:45.528472 | orchestrator | 2025-07-04 18:10:45 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:10:48.570145 | orchestrator | 2025-07-04 18:10:48 | INFO  | Task c24ea607-8fb2-4caa-8cdb-dbeb4a8465dc is in state STARTED 2025-07-04 18:10:48.570415 | orchestrator | 2025-07-04 18:10:48 | INFO  | Task a21bc798-738f-4a98-9b83-5e93eed95645 is in state STARTED 2025-07-04 18:10:48.570943 | orchestrator | 2025-07-04 18:10:48 | INFO  | Task 9a30792d-643f-4105-babc-b76508c80c19 is in state SUCCESS 2025-07-04 18:10:48.573068 | orchestrator | 2025-07-04 18:10:48.573106 | orchestrator | 2025-07-04 18:10:48.573118 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2025-07-04 18:10:48.573130 | orchestrator | 2025-07-04 18:10:48.573141 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-07-04 18:10:48.573152 | orchestrator | Friday 04 July 2025 18:09:35 +0000 (0:00:00.202) 0:00:00.202 
*********** 2025-07-04 18:10:48.573164 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-07-04 18:10:48.573175 | orchestrator | 2025-07-04 18:10:48.573186 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-07-04 18:10:48.573197 | orchestrator | Friday 04 July 2025 18:09:36 +0000 (0:00:00.903) 0:00:01.106 *********** 2025-07-04 18:10:48.573207 | orchestrator | changed: [testbed-manager] 2025-07-04 18:10:48.573219 | orchestrator | 2025-07-04 18:10:48.573229 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2025-07-04 18:10:48.573240 | orchestrator | Friday 04 July 2025 18:09:37 +0000 (0:00:01.357) 0:00:02.463 *********** 2025-07-04 18:10:48.573251 | orchestrator | changed: [testbed-manager] 2025-07-04 18:10:48.573261 | orchestrator | 2025-07-04 18:10:48.573272 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-04 18:10:48.573283 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-04 18:10:48.573295 | orchestrator | 2025-07-04 18:10:48.573306 | orchestrator | 2025-07-04 18:10:48.573316 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-04 18:10:48.573327 | orchestrator | Friday 04 July 2025 18:09:38 +0000 (0:00:00.526) 0:00:02.990 *********** 2025-07-04 18:10:48.573338 | orchestrator | =============================================================================== 2025-07-04 18:10:48.573348 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.36s 2025-07-04 18:10:48.573359 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.90s 2025-07-04 18:10:48.573369 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.53s 2025-07-04 18:10:48.573381 | orchestrator | 
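The "Change server address in the kubeconfig file" task above rewrites the cluster endpoint so the copied kubeconfig points at a reachable node address instead of the loopback default. A minimal sketch of that substitution, assumed logic only (not the playbook's actual implementation):

```python
import re

def set_kubeconfig_server(kubeconfig_text: str, new_server: str) -> str:
    # Swap the cluster endpoint (e.g. https://127.0.0.1:6443) for an
    # externally reachable address; assumed to mirror the task's intent.
    return re.sub(r"server: \S+", f"server: {new_server}", kubeconfig_text)

print(set_kubeconfig_server("    server: https://127.0.0.1:6443",
                            "https://192.168.16.10:6443"))
```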
2025-07-04 18:10:48.573391 | orchestrator | 2025-07-04 18:10:48.573402 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-07-04 18:10:48.573438 | orchestrator | 2025-07-04 18:10:48.573449 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-07-04 18:10:48.573460 | orchestrator | Friday 04 July 2025 18:09:35 +0000 (0:00:00.235) 0:00:00.235 *********** 2025-07-04 18:10:48.573471 | orchestrator | ok: [testbed-manager] 2025-07-04 18:10:48.573482 | orchestrator | 2025-07-04 18:10:48.573493 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-07-04 18:10:48.573504 | orchestrator | Friday 04 July 2025 18:09:36 +0000 (0:00:00.616) 0:00:00.851 *********** 2025-07-04 18:10:48.573514 | orchestrator | ok: [testbed-manager] 2025-07-04 18:10:48.573525 | orchestrator | 2025-07-04 18:10:48.573550 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-07-04 18:10:48.573561 | orchestrator | Friday 04 July 2025 18:09:37 +0000 (0:00:00.808) 0:00:01.660 *********** 2025-07-04 18:10:48.573572 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-07-04 18:10:48.573582 | orchestrator | 2025-07-04 18:10:48.573593 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-07-04 18:10:48.573603 | orchestrator | Friday 04 July 2025 18:09:37 +0000 (0:00:00.769) 0:00:02.430 *********** 2025-07-04 18:10:48.573614 | orchestrator | changed: [testbed-manager] 2025-07-04 18:10:48.573625 | orchestrator | 2025-07-04 18:10:48.573635 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-07-04 18:10:48.573646 | orchestrator | Friday 04 July 2025 18:09:39 +0000 (0:00:01.317) 0:00:03.747 *********** 2025-07-04 18:10:48.573715 | orchestrator | changed: [testbed-manager] 2025-07-04 18:10:48.573747 
| orchestrator | 2025-07-04 18:10:48.573758 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-07-04 18:10:48.573769 | orchestrator | Friday 04 July 2025 18:09:39 +0000 (0:00:00.751) 0:00:04.499 *********** 2025-07-04 18:10:48.573779 | orchestrator | changed: [testbed-manager -> localhost] 2025-07-04 18:10:48.573790 | orchestrator | 2025-07-04 18:10:48.573801 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-07-04 18:10:48.573812 | orchestrator | Friday 04 July 2025 18:09:41 +0000 (0:00:01.521) 0:00:06.020 *********** 2025-07-04 18:10:48.573823 | orchestrator | changed: [testbed-manager -> localhost] 2025-07-04 18:10:48.573833 | orchestrator | 2025-07-04 18:10:48.573844 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-07-04 18:10:48.573854 | orchestrator | Friday 04 July 2025 18:09:42 +0000 (0:00:00.899) 0:00:06.919 *********** 2025-07-04 18:10:48.573865 | orchestrator | ok: [testbed-manager] 2025-07-04 18:10:48.573876 | orchestrator | 2025-07-04 18:10:48.573886 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-07-04 18:10:48.573897 | orchestrator | Friday 04 July 2025 18:09:42 +0000 (0:00:00.441) 0:00:07.360 *********** 2025-07-04 18:10:48.573907 | orchestrator | ok: [testbed-manager] 2025-07-04 18:10:48.573918 | orchestrator | 2025-07-04 18:10:48.573928 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-04 18:10:48.573939 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-04 18:10:48.573950 | orchestrator | 2025-07-04 18:10:48.573961 | orchestrator | 2025-07-04 18:10:48.573972 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-04 18:10:48.573982 | orchestrator | Friday 04 July 2025 
18:09:43 +0000 (0:00:00.329) 0:00:07.690 *********** 2025-07-04 18:10:48.573993 | orchestrator | =============================================================================== 2025-07-04 18:10:48.574004 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.52s 2025-07-04 18:10:48.574117 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.32s 2025-07-04 18:10:48.574136 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.90s 2025-07-04 18:10:48.574160 | orchestrator | Create .kube directory -------------------------------------------------- 0.81s 2025-07-04 18:10:48.574172 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.77s 2025-07-04 18:10:48.574194 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.75s 2025-07-04 18:10:48.574205 | orchestrator | Get home directory of operator user ------------------------------------- 0.62s 2025-07-04 18:10:48.574216 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.44s 2025-07-04 18:10:48.574227 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.33s 2025-07-04 18:10:48.574242 | orchestrator | 2025-07-04 18:10:48.574260 | orchestrator | 2025-07-04 18:10:48.574278 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2025-07-04 18:10:48.574296 | orchestrator | 2025-07-04 18:10:48.574313 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-07-04 18:10:48.574330 | orchestrator | Friday 04 July 2025 18:08:24 +0000 (0:00:00.106) 0:00:00.106 *********** 2025-07-04 18:10:48.574349 | orchestrator | ok: [localhost] => { 2025-07-04 18:10:48.574368 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been 
deployed. This is fine." 2025-07-04 18:10:48.574387 | orchestrator | } 2025-07-04 18:10:48.574810 | orchestrator | 2025-07-04 18:10:48.575064 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2025-07-04 18:10:48.575081 | orchestrator | Friday 04 July 2025 18:08:24 +0000 (0:00:00.044) 0:00:00.151 *********** 2025-07-04 18:10:48.575093 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2025-07-04 18:10:48.575106 | orchestrator | ...ignoring 2025-07-04 18:10:48.575117 | orchestrator | 2025-07-04 18:10:48.575128 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2025-07-04 18:10:48.575139 | orchestrator | Friday 04 July 2025 18:08:28 +0000 (0:00:03.147) 0:00:03.298 *********** 2025-07-04 18:10:48.575150 | orchestrator | skipping: [localhost] 2025-07-04 18:10:48.575233 | orchestrator | 2025-07-04 18:10:48.575248 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2025-07-04 18:10:48.575260 | orchestrator | Friday 04 July 2025 18:08:28 +0000 (0:00:00.068) 0:00:03.367 *********** 2025-07-04 18:10:48.575272 | orchestrator | ok: [localhost] 2025-07-04 18:10:48.575285 | orchestrator | 2025-07-04 18:10:48.575297 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-04 18:10:48.575310 | orchestrator | 2025-07-04 18:10:48.575322 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-04 18:10:48.575349 | orchestrator | Friday 04 July 2025 18:08:28 +0000 (0:00:00.224) 0:00:03.592 *********** 2025-07-04 18:10:48.575362 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:10:48.575375 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:10:48.575387 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:10:48.575399 | 
orchestrator | 2025-07-04 18:10:48.575424 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-04 18:10:48.575436 | orchestrator | Friday 04 July 2025 18:08:28 +0000 (0:00:00.352) 0:00:03.945 *********** 2025-07-04 18:10:48.575449 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2025-07-04 18:10:48.575462 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2025-07-04 18:10:48.575475 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2025-07-04 18:10:48.575486 | orchestrator | 2025-07-04 18:10:48.575510 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2025-07-04 18:10:48.575521 | orchestrator | 2025-07-04 18:10:48.575566 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-07-04 18:10:48.575577 | orchestrator | Friday 04 July 2025 18:08:29 +0000 (0:00:00.717) 0:00:04.662 *********** 2025-07-04 18:10:48.575588 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-04 18:10:48.575599 | orchestrator | 2025-07-04 18:10:48.575610 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-07-04 18:10:48.575621 | orchestrator | Friday 04 July 2025 18:08:30 +0000 (0:00:01.018) 0:00:05.681 *********** 2025-07-04 18:10:48.575644 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:10:48.575655 | orchestrator | 2025-07-04 18:10:48.575666 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2025-07-04 18:10:48.575677 | orchestrator | Friday 04 July 2025 18:08:31 +0000 (0:00:01.251) 0:00:06.933 *********** 2025-07-04 18:10:48.575688 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:10:48.575698 | orchestrator | 2025-07-04 18:10:48.575709 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] 
************************************* 2025-07-04 18:10:48.575720 | orchestrator | Friday 04 July 2025 18:08:32 +0000 (0:00:00.461) 0:00:07.394 *********** 2025-07-04 18:10:48.575751 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:10:48.575762 | orchestrator | 2025-07-04 18:10:48.575772 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2025-07-04 18:10:48.575783 | orchestrator | Friday 04 July 2025 18:08:32 +0000 (0:00:00.532) 0:00:07.926 *********** 2025-07-04 18:10:48.575794 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:10:48.575804 | orchestrator | 2025-07-04 18:10:48.575815 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2025-07-04 18:10:48.575826 | orchestrator | Friday 04 July 2025 18:08:33 +0000 (0:00:00.397) 0:00:08.323 *********** 2025-07-04 18:10:48.575836 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:10:48.575847 | orchestrator | 2025-07-04 18:10:48.575858 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-07-04 18:10:48.575868 | orchestrator | Friday 04 July 2025 18:08:33 +0000 (0:00:00.624) 0:00:08.948 *********** 2025-07-04 18:10:48.575879 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-1, testbed-node-0, testbed-node-2 2025-07-04 18:10:48.575890 | orchestrator | 2025-07-04 18:10:48.575901 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-07-04 18:10:48.575923 | orchestrator | Friday 04 July 2025 18:08:34 +0000 (0:00:01.042) 0:00:09.990 *********** 2025-07-04 18:10:48.575934 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:10:48.575945 | orchestrator | 2025-07-04 18:10:48.575956 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2025-07-04 18:10:48.575966 | orchestrator | Friday 04 July 2025 18:08:35 +0000 (0:00:00.896) 
0:00:10.886 *********** 2025-07-04 18:10:48.575977 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:10:48.575987 | orchestrator | 2025-07-04 18:10:48.575998 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2025-07-04 18:10:48.576009 | orchestrator | Friday 04 July 2025 18:08:36 +0000 (0:00:00.642) 0:00:11.529 *********** 2025-07-04 18:10:48.576019 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:10:48.576030 | orchestrator | 2025-07-04 18:10:48.576041 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2025-07-04 18:10:48.576052 | orchestrator | Friday 04 July 2025 18:08:36 +0000 (0:00:00.357) 0:00:11.886 *********** 2025-07-04 18:10:48.576070 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-07-04 18:10:48.576110 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 
'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-07-04 18:10:48.576124 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 
'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-07-04 18:10:48.576136 | orchestrator | 2025-07-04 18:10:48.576147 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2025-07-04 18:10:48.576158 | orchestrator | Friday 04 July 2025 18:08:37 +0000 (0:00:01.156) 0:00:13.043 *********** 2025-07-04 18:10:48.576179 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-07-04 18:10:48.576192 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 
'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-07-04 18:10:48.576213 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-07-04 18:10:48.576224 | orchestrator | 2025-07-04 18:10:48.576235 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2025-07-04 18:10:48.576246 | orchestrator | Friday 04 July 2025 18:08:39 +0000 (0:00:01.820) 0:00:14.863 *********** 2025-07-04 18:10:48.576257 | orchestrator | 
changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-07-04 18:10:48.576268 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-07-04 18:10:48.576352 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-07-04 18:10:48.576398 | orchestrator | 2025-07-04 18:10:48.576410 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2025-07-04 18:10:48.576421 | orchestrator | Friday 04 July 2025 18:08:41 +0000 (0:00:01.815) 0:00:16.679 *********** 2025-07-04 18:10:48.576431 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-07-04 18:10:48.576442 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-07-04 18:10:48.576453 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-07-04 18:10:48.576464 | orchestrator | 2025-07-04 18:10:48.576482 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2025-07-04 18:10:48.576493 | orchestrator | Friday 04 July 2025 18:08:43 +0000 (0:00:02.205) 0:00:18.884 *********** 2025-07-04 18:10:48.576504 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-07-04 18:10:48.576514 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-07-04 18:10:48.576525 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-07-04 18:10:48.576535 | orchestrator | 2025-07-04 18:10:48.576546 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2025-07-04 18:10:48.576557 | orchestrator | Friday 04 July 2025 18:08:45 +0000 (0:00:01.877) 0:00:20.762 *********** 
2025-07-04 18:10:48.576567 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-07-04 18:10:48.576578 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-07-04 18:10:48.576589 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-07-04 18:10:48.576607 | orchestrator | 2025-07-04 18:10:48.576618 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2025-07-04 18:10:48.576629 | orchestrator | Friday 04 July 2025 18:08:47 +0000 (0:00:02.186) 0:00:22.948 *********** 2025-07-04 18:10:48.576639 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-07-04 18:10:48.576650 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-07-04 18:10:48.576661 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-07-04 18:10:48.576672 | orchestrator | 2025-07-04 18:10:48.576682 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2025-07-04 18:10:48.576693 | orchestrator | Friday 04 July 2025 18:08:49 +0000 (0:00:01.678) 0:00:24.627 *********** 2025-07-04 18:10:48.576703 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-07-04 18:10:48.576714 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-07-04 18:10:48.576741 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-07-04 18:10:48.576752 | orchestrator | 2025-07-04 18:10:48.576763 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-07-04 18:10:48.576774 | orchestrator | Friday 04 July 
2025 18:08:50 +0000 (0:00:01.353) 0:00:25.981 *********** 2025-07-04 18:10:48.576789 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:10:48.576801 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:10:48.576811 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:10:48.576822 | orchestrator | 2025-07-04 18:10:48.576833 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2025-07-04 18:10:48.576843 | orchestrator | Friday 04 July 2025 18:08:51 +0000 (0:00:00.360) 0:00:26.341 *********** 2025-07-04 18:10:48.576855 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-07-04 18:10:48.576878 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-07-04 18:10:48.576912 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-07-04 18:10:48.576936 | orchestrator | 2025-07-04 18:10:48.576947 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] 
************************************* 2025-07-04 18:10:48.576958 | orchestrator | Friday 04 July 2025 18:08:52 +0000 (0:00:01.447) 0:00:27.789 *********** 2025-07-04 18:10:48.576969 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:10:48.576979 | orchestrator | changed: [testbed-node-2] 2025-07-04 18:10:48.576990 | orchestrator | changed: [testbed-node-1] 2025-07-04 18:10:48.577000 | orchestrator | 2025-07-04 18:10:48.577011 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2025-07-04 18:10:48.577022 | orchestrator | Friday 04 July 2025 18:08:53 +0000 (0:00:01.037) 0:00:28.827 *********** 2025-07-04 18:10:48.577032 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:10:48.577043 | orchestrator | changed: [testbed-node-2] 2025-07-04 18:10:48.577053 | orchestrator | changed: [testbed-node-1] 2025-07-04 18:10:48.577064 | orchestrator | 2025-07-04 18:10:48.577075 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2025-07-04 18:10:48.577090 | orchestrator | Friday 04 July 2025 18:09:04 +0000 (0:00:10.557) 0:00:39.384 *********** 2025-07-04 18:10:48.577101 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:10:48.577112 | orchestrator | changed: [testbed-node-1] 2025-07-04 18:10:48.577122 | orchestrator | changed: [testbed-node-2] 2025-07-04 18:10:48.577133 | orchestrator | 2025-07-04 18:10:48.577144 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-07-04 18:10:48.577155 | orchestrator | 2025-07-04 18:10:48.577165 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-07-04 18:10:48.577176 | orchestrator | Friday 04 July 2025 18:09:04 +0000 (0:00:00.675) 0:00:40.059 *********** 2025-07-04 18:10:48.577186 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:10:48.577197 | orchestrator | 2025-07-04 18:10:48.577208 | orchestrator | TASK [rabbitmq : Put RabbitMQ node 
into maintenance mode] ********************** 2025-07-04 18:10:48.577218 | orchestrator | Friday 04 July 2025 18:09:05 +0000 (0:00:00.674) 0:00:40.734 *********** 2025-07-04 18:10:48.577229 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:10:48.577240 | orchestrator | 2025-07-04 18:10:48.577250 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-07-04 18:10:48.577261 | orchestrator | Friday 04 July 2025 18:09:05 +0000 (0:00:00.221) 0:00:40.955 *********** 2025-07-04 18:10:48.577272 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:10:48.577282 | orchestrator | 2025-07-04 18:10:48.577293 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-07-04 18:10:48.577304 | orchestrator | Friday 04 July 2025 18:09:07 +0000 (0:00:01.757) 0:00:42.713 *********** 2025-07-04 18:10:48.577314 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:10:48.577325 | orchestrator | 2025-07-04 18:10:48.577335 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-07-04 18:10:48.577353 | orchestrator | 2025-07-04 18:10:48.577364 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-07-04 18:10:48.577375 | orchestrator | Friday 04 July 2025 18:10:05 +0000 (0:00:58.031) 0:01:40.744 *********** 2025-07-04 18:10:48.577385 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:10:48.577396 | orchestrator | 2025-07-04 18:10:48.577407 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-07-04 18:10:48.577418 | orchestrator | Friday 04 July 2025 18:10:06 +0000 (0:00:00.621) 0:01:41.366 *********** 2025-07-04 18:10:48.577428 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:10:48.577439 | orchestrator | 2025-07-04 18:10:48.577449 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 
2025-07-04 18:10:48.577460 | orchestrator | Friday 04 July 2025 18:10:06 +0000 (0:00:00.557) 0:01:41.923 *********** 2025-07-04 18:10:48.577471 | orchestrator | changed: [testbed-node-1] 2025-07-04 18:10:48.577481 | orchestrator | 2025-07-04 18:10:48.577492 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-07-04 18:10:48.577503 | orchestrator | Friday 04 July 2025 18:10:13 +0000 (0:00:06.712) 0:01:48.636 *********** 2025-07-04 18:10:48.577513 | orchestrator | changed: [testbed-node-1] 2025-07-04 18:10:48.577524 | orchestrator | 2025-07-04 18:10:48.577535 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-07-04 18:10:48.577545 | orchestrator | 2025-07-04 18:10:48.577556 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-07-04 18:10:48.577573 | orchestrator | Friday 04 July 2025 18:10:24 +0000 (0:00:11.152) 0:01:59.788 *********** 2025-07-04 18:10:48.577584 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:10:48.577595 | orchestrator | 2025-07-04 18:10:48.577606 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-07-04 18:10:48.577616 | orchestrator | Friday 04 July 2025 18:10:25 +0000 (0:00:00.729) 0:02:00.518 *********** 2025-07-04 18:10:48.577627 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:10:48.577637 | orchestrator | 2025-07-04 18:10:48.577648 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-07-04 18:10:48.577658 | orchestrator | Friday 04 July 2025 18:10:25 +0000 (0:00:00.377) 0:02:00.895 *********** 2025-07-04 18:10:48.577669 | orchestrator | changed: [testbed-node-2] 2025-07-04 18:10:48.577679 | orchestrator | 2025-07-04 18:10:48.577690 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-07-04 18:10:48.577701 | orchestrator | Friday 04 
July 2025 18:10:27 +0000 (0:00:01.833) 0:02:02.729 ***********
2025-07-04 18:10:48.577711 | orchestrator | changed: [testbed-node-2]
2025-07-04 18:10:48.577721 | orchestrator |
2025-07-04 18:10:48.577751 | orchestrator | PLAY [Apply rabbitmq post-configuration] ***************************************
2025-07-04 18:10:48.577762 | orchestrator |
2025-07-04 18:10:48.577773 | orchestrator | TASK [Include rabbitmq post-deploy.yml] ****************************************
2025-07-04 18:10:48.577783 | orchestrator | Friday 04 July 2025 18:10:42 +0000 (0:00:15.222) 0:02:17.951 ***********
2025-07-04 18:10:48.577794 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-04 18:10:48.577805 | orchestrator |
2025-07-04 18:10:48.577816 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ******************************
2025-07-04 18:10:48.577826 | orchestrator | Friday 04 July 2025 18:10:43 +0000 (0:00:00.568) 0:02:18.520 ***********
2025-07-04 18:10:48.577837 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2025-07-04 18:10:48.577847 | orchestrator | enable_outward_rabbitmq_True
2025-07-04 18:10:48.577858 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2025-07-04 18:10:48.577869 | orchestrator | outward_rabbitmq_restart
2025-07-04 18:10:48.577933 | orchestrator | ok: [testbed-node-0]
2025-07-04 18:10:48.577947 | orchestrator | ok: [testbed-node-2]
2025-07-04 18:10:48.577965 | orchestrator | ok: [testbed-node-1]
2025-07-04 18:10:48.577982 | orchestrator |
2025-07-04 18:10:48.578000 | orchestrator | PLAY [Apply role rabbitmq (outward)] *******************************************
2025-07-04 18:10:48.578093 | orchestrator | skipping: no hosts matched
2025-07-04 18:10:48.578109 | orchestrator |
2025-07-04 18:10:48.578119 | orchestrator | PLAY [Restart rabbitmq (outward) services] *************************************
2025-07-04 18:10:48.578130 | orchestrator | skipping: no hosts matched
2025-07-04 18:10:48.578141 | orchestrator |
2025-07-04 18:10:48.578151 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] *****************************
2025-07-04 18:10:48.578162 | orchestrator | skipping: no hosts matched
2025-07-04 18:10:48.578173 | orchestrator |
2025-07-04 18:10:48.578183 | orchestrator | PLAY RECAP *********************************************************************
2025-07-04 18:10:48.578200 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2025-07-04 18:10:48.578273 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-07-04 18:10:48.578286 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-04 18:10:48.578297 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-04 18:10:48.578308 | orchestrator |
2025-07-04 18:10:48.578319 | orchestrator |
2025-07-04 18:10:48.578329 | orchestrator | TASKS RECAP ********************************************************************
2025-07-04 18:10:48.578340 | orchestrator | Friday 04 July 2025 18:10:45 +0000 (0:00:02.295) 0:02:20.816 ***********
2025-07-04 18:10:48.578350 | orchestrator | ===============================================================================
2025-07-04 18:10:48.578361 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 84.41s
2025-07-04 18:10:48.578372 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------ 10.56s
2025-07-04 18:10:48.578383 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 10.30s
2025-07-04 18:10:48.578393 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.15s
2025-07-04 18:10:48.578404 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.30s
2025-07-04 18:10:48.578415 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.21s
2025-07-04 18:10:48.578425 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.19s
2025-07-04 18:10:48.578436 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.03s
2025-07-04 18:10:48.578447 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.88s
2025-07-04 18:10:48.578457 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.82s
2025-07-04 18:10:48.578468 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.82s
2025-07-04 18:10:48.578478 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.68s
2025-07-04 18:10:48.578489 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.45s
2025-07-04 18:10:48.578500 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.35s
2025-07-04 18:10:48.578510 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.25s
2025-07-04 18:10:48.578530 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.16s
2025-07-04 18:10:48.578541 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 1.16s
2025-07-04 18:10:48.578552 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.04s
2025-07-04 18:10:48.578563 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 1.04s
2025-07-04 18:10:48.578574 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.02s
2025-07-04 18:10:48.578585 | orchestrator | 2025-07-04 18:10:48 | INFO  | Task 6e1b8e93-1adc-4061-afa0-122aa0f01357 is in state STARTED
2025-07-04 18:10:48.578604 | orchestrator | 2025-07-04 18:10:48 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED
2025-07-04 18:10:48.578615 | orchestrator | 2025-07-04 18:10:48 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:10:51.613913 | orchestrator | 2025-07-04 18:10:51 | INFO  | Task c24ea607-8fb2-4caa-8cdb-dbeb4a8465dc is in state STARTED
2025-07-04 18:10:51.614389 | orchestrator | 2025-07-04 18:10:51 | INFO  | Task a21bc798-738f-4a98-9b83-5e93eed95645 is in state STARTED
2025-07-04 18:10:51.615188 | orchestrator | 2025-07-04 18:10:51 | INFO  | Task 6e1b8e93-1adc-4061-afa0-122aa0f01357 is in state STARTED
2025-07-04 18:10:51.622667 | orchestrator | 2025-07-04 18:10:51 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED
2025-07-04 18:10:51.622779 | orchestrator | 2025-07-04 18:10:51 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:10:54.649037 | orchestrator | 2025-07-04 18:10:54 | INFO  | Task c24ea607-8fb2-4caa-8cdb-dbeb4a8465dc is in state STARTED
2025-07-04 18:10:54.650243 | orchestrator | 2025-07-04 18:10:54 | INFO  | Task a21bc798-738f-4a98-9b83-5e93eed95645 is in state STARTED
2025-07-04 18:10:54.651564 | orchestrator | 2025-07-04 18:10:54 | INFO  | Task 6e1b8e93-1adc-4061-afa0-122aa0f01357 is in state STARTED
2025-07-04 18:10:54.652821 | orchestrator | 2025-07-04 18:10:54 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED
2025-07-04 18:10:54.653632 | orchestrator | 2025-07-04 18:10:54 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:10:57.698634 | orchestrator | 2025-07-04 18:10:57 | INFO  | Task c24ea607-8fb2-4caa-8cdb-dbeb4a8465dc is in state STARTED
2025-07-04 18:10:57.699769 | orchestrator | 2025-07-04 18:10:57 | INFO  | Task a21bc798-738f-4a98-9b83-5e93eed95645 is in state STARTED
2025-07-04 18:10:57.699899 | orchestrator | 2025-07-04 18:10:57 | INFO  | Task 6e1b8e93-1adc-4061-afa0-122aa0f01357 is in state STARTED
2025-07-04 18:10:57.701936 | orchestrator | 2025-07-04 18:10:57 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED
2025-07-04 18:10:57.702169 | orchestrator | 2025-07-04 18:10:57 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:11:00.744343 | orchestrator | 2025-07-04 18:11:00 | INFO  | Task c24ea607-8fb2-4caa-8cdb-dbeb4a8465dc is in state SUCCESS
2025-07-04 18:11:00.747559 | orchestrator | 2025-07-04 18:11:00 | INFO  | Task a21bc798-738f-4a98-9b83-5e93eed95645 is in state STARTED
2025-07-04 18:11:00.748757 | orchestrator | 2025-07-04 18:11:00 | INFO  | Task 6e1b8e93-1adc-4061-afa0-122aa0f01357 is in state STARTED
2025-07-04 18:11:00.748803 | orchestrator | 2025-07-04 18:11:00 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED
2025-07-04 18:11:00.748825 | orchestrator | 2025-07-04 18:11:00 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:11:03.796042 | orchestrator | 2025-07-04 18:11:03 | INFO  | Task a21bc798-738f-4a98-9b83-5e93eed95645 is in state STARTED
2025-07-04 18:11:03.797220 | orchestrator | 2025-07-04 18:11:03 | INFO  | Task 6e1b8e93-1adc-4061-afa0-122aa0f01357 is in state STARTED
2025-07-04 18:11:03.798849 | orchestrator | 2025-07-04 18:11:03 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED
2025-07-04 18:11:03.798895 | orchestrator | 2025-07-04 18:11:03 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:11:06.850275 | orchestrator | 2025-07-04 18:11:06 | INFO  | Task a21bc798-738f-4a98-9b83-5e93eed95645 is in state STARTED
2025-07-04 18:11:06.851758 | orchestrator | 2025-07-04 18:11:06 | INFO  | Task 6e1b8e93-1adc-4061-afa0-122aa0f01357 is in state STARTED
2025-07-04 18:11:06.853436 | orchestrator | 2025-07-04 18:11:06 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED
2025-07-04 18:11:06.853546 | orchestrator | 2025-07-04 18:11:06 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:11:09.895253 | orchestrator | 2025-07-04 18:11:09 | INFO  | Task a21bc798-738f-4a98-9b83-5e93eed95645 is in state STARTED
2025-07-04 18:11:09.898974 | orchestrator | 2025-07-04 18:11:09 | INFO  | Task 6e1b8e93-1adc-4061-afa0-122aa0f01357 is in state STARTED
2025-07-04 18:11:09.902006 | orchestrator | 2025-07-04 18:11:09 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED
2025-07-04 18:11:09.902483 | orchestrator | 2025-07-04 18:11:09 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:11:12.948345 | orchestrator | 2025-07-04 18:11:12 | INFO  | Task a21bc798-738f-4a98-9b83-5e93eed95645 is in state STARTED
2025-07-04 18:11:12.950352 | orchestrator | 2025-07-04 18:11:12 | INFO  | Task 6e1b8e93-1adc-4061-afa0-122aa0f01357 is in state STARTED
2025-07-04 18:11:12.952908 | orchestrator | 2025-07-04 18:11:12 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED
2025-07-04 18:11:12.952937 | orchestrator | 2025-07-04 18:11:12 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:11:15.997478 | orchestrator | 2025-07-04 18:11:15 | INFO  | Task a21bc798-738f-4a98-9b83-5e93eed95645 is in state STARTED
2025-07-04 18:11:15.999557 | orchestrator | 2025-07-04 18:11:15 | INFO  | Task 6e1b8e93-1adc-4061-afa0-122aa0f01357 is in state STARTED
2025-07-04 18:11:16.002421 | orchestrator | 2025-07-04 18:11:15 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED
2025-07-04 18:11:16.002920 | orchestrator | 2025-07-04 18:11:16 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:11:19.042377 | orchestrator | 2025-07-04 18:11:19 | INFO  | Task a21bc798-738f-4a98-9b83-5e93eed95645 is in state STARTED
2025-07-04 18:11:19.043267 | orchestrator | 2025-07-04 18:11:19 | INFO  | Task 6e1b8e93-1adc-4061-afa0-122aa0f01357 is in state STARTED
2025-07-04 18:11:19.043467 | orchestrator | 2025-07-04 18:11:19 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED
2025-07-04 18:11:19.043490 | orchestrator | 2025-07-04 18:11:19 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:11:22.080704 | orchestrator | 2025-07-04 18:11:22 | INFO  | Task a21bc798-738f-4a98-9b83-5e93eed95645 is in state STARTED
2025-07-04 18:11:22.082303 | orchestrator | 2025-07-04 18:11:22 | INFO  | Task 6e1b8e93-1adc-4061-afa0-122aa0f01357 is in state STARTED
2025-07-04 18:11:22.084877 | orchestrator | 2025-07-04 18:11:22 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED
2025-07-04 18:11:22.084958 | orchestrator | 2025-07-04 18:11:22 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:11:25.131057 | orchestrator | 2025-07-04 18:11:25 | INFO  | Task a21bc798-738f-4a98-9b83-5e93eed95645 is in state STARTED
2025-07-04 18:11:25.136973 | orchestrator | 2025-07-04 18:11:25 | INFO  | Task 6e1b8e93-1adc-4061-afa0-122aa0f01357 is in state STARTED
2025-07-04 18:11:25.139127 | orchestrator | 2025-07-04 18:11:25 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED
2025-07-04 18:11:25.139584 | orchestrator | 2025-07-04 18:11:25 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:11:28.200407 | orchestrator | 2025-07-04 18:11:28 | INFO  | Task a21bc798-738f-4a98-9b83-5e93eed95645 is in state STARTED
2025-07-04 18:11:28.204072 | orchestrator | 2025-07-04 18:11:28 | INFO  | Task 6e1b8e93-1adc-4061-afa0-122aa0f01357 is in state STARTED
2025-07-04 18:11:28.204156 | orchestrator | 2025-07-04 18:11:28 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED
2025-07-04 18:11:28.204224 | orchestrator | 2025-07-04 18:11:28 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:11:31.255495 | orchestrator | 2025-07-04 18:11:31 | INFO  | Task a21bc798-738f-4a98-9b83-5e93eed95645 is in state STARTED
2025-07-04 18:11:31.260477 | orchestrator | 2025-07-04 18:11:31 | INFO  | Task 6e1b8e93-1adc-4061-afa0-122aa0f01357 is in state STARTED
2025-07-04 18:11:31.260575 | orchestrator | 2025-07-04 18:11:31 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED
2025-07-04 18:11:31.260590 | orchestrator | 2025-07-04 18:11:31 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:11:34.294577 | orchestrator | 2025-07-04 18:11:34 | INFO  | Task a21bc798-738f-4a98-9b83-5e93eed95645 is in state STARTED
2025-07-04 18:11:34.294887 | orchestrator | 2025-07-04 18:11:34 | INFO  | Task 6e1b8e93-1adc-4061-afa0-122aa0f01357 is in state STARTED
2025-07-04 18:11:34.296298 | orchestrator | 2025-07-04 18:11:34 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED
2025-07-04 18:11:34.296397 | orchestrator | 2025-07-04 18:11:34 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:11:37.354333 | orchestrator | 2025-07-04 18:11:37 | INFO  | Task a21bc798-738f-4a98-9b83-5e93eed95645 is in state STARTED
2025-07-04 18:11:37.354616 | orchestrator | 2025-07-04 18:11:37 | INFO  | Task 6e1b8e93-1adc-4061-afa0-122aa0f01357 is in state STARTED
2025-07-04 18:11:37.355729 | orchestrator | 2025-07-04 18:11:37 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED
2025-07-04 18:11:37.355772 | orchestrator | 2025-07-04 18:11:37 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:11:40.412892 | orchestrator | 2025-07-04 18:11:40 | INFO  | Task a21bc798-738f-4a98-9b83-5e93eed95645 is in state STARTED
2025-07-04 18:11:40.413117 | orchestrator | 2025-07-04 18:11:40 | INFO  | Task 6e1b8e93-1adc-4061-afa0-122aa0f01357 is in state STARTED
2025-07-04 18:11:40.413856 | orchestrator | 2025-07-04 18:11:40 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED
2025-07-04 18:11:40.413884 | orchestrator | 2025-07-04 18:11:40 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:11:43.467383 | orchestrator | 2025-07-04 18:11:43 | INFO  | Task a21bc798-738f-4a98-9b83-5e93eed95645 is in state STARTED
2025-07-04 18:11:43.468744 | orchestrator | 2025-07-04 18:11:43 | INFO  | Task 6e1b8e93-1adc-4061-afa0-122aa0f01357 is in state STARTED
2025-07-04 18:11:43.470456 | orchestrator | 2025-07-04 18:11:43 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED
2025-07-04 18:11:43.470552 | orchestrator | 2025-07-04 18:11:43 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:11:46.521315 | orchestrator | 2025-07-04 18:11:46 | INFO  | Task a21bc798-738f-4a98-9b83-5e93eed95645 is in state STARTED
2025-07-04 18:11:46.522916 | orchestrator | 2025-07-04 18:11:46 | INFO  | Task 6e1b8e93-1adc-4061-afa0-122aa0f01357 is in state STARTED
2025-07-04 18:11:46.524741 | orchestrator | 2025-07-04 18:11:46 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED
2025-07-04 18:11:46.524769 | orchestrator | 2025-07-04 18:11:46 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:11:49.572186 | orchestrator | 2025-07-04 18:11:49 | INFO  | Task a21bc798-738f-4a98-9b83-5e93eed95645 is in state STARTED
2025-07-04 18:11:49.574120 | orchestrator | 2025-07-04 18:11:49 | INFO  | Task 6e1b8e93-1adc-4061-afa0-122aa0f01357 is in state STARTED
2025-07-04 18:11:49.577067 | orchestrator | 2025-07-04 18:11:49 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED
2025-07-04 18:11:49.577415 | orchestrator | 2025-07-04 18:11:49 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:11:52.635531 | orchestrator | 2025-07-04 18:11:52 | INFO  | Task a21bc798-738f-4a98-9b83-5e93eed95645 is in state STARTED
2025-07-04 18:11:52.635859 | orchestrator | 2025-07-04 18:11:52 | INFO  | Task 6e1b8e93-1adc-4061-afa0-122aa0f01357 is in state STARTED
2025-07-04 18:11:52.636770 | orchestrator | 2025-07-04 18:11:52 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED
2025-07-04 18:11:52.636860 | orchestrator | 2025-07-04 18:11:52 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:11:55.673584 | orchestrator | 2025-07-04 18:11:55 | INFO  | Task a21bc798-738f-4a98-9b83-5e93eed95645 is in state STARTED
2025-07-04 18:11:55.675011 | orchestrator | 2025-07-04 18:11:55 | INFO  | Task 6e1b8e93-1adc-4061-afa0-122aa0f01357 is in state STARTED
2025-07-04 18:11:55.677731 | orchestrator | 2025-07-04 18:11:55 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED
2025-07-04 18:11:55.677794 | orchestrator | 2025-07-04 18:11:55 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:11:58.722978 | orchestrator | 2025-07-04 18:11:58 | INFO  | Task a21bc798-738f-4a98-9b83-5e93eed95645 is in state STARTED
2025-07-04 18:11:58.725432 | orchestrator | 2025-07-04 18:11:58 | INFO  | Task 6e1b8e93-1adc-4061-afa0-122aa0f01357 is in state STARTED
2025-07-04 18:11:58.725466 | orchestrator | 2025-07-04 18:11:58 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED
2025-07-04 18:11:58.725480 | orchestrator | 2025-07-04 18:11:58 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:12:01.763317 | orchestrator | 2025-07-04 18:12:01 | INFO  | Task a21bc798-738f-4a98-9b83-5e93eed95645 is in state STARTED
2025-07-04 18:12:01.764334 | orchestrator | 2025-07-04 18:12:01 | INFO  | Task 6e1b8e93-1adc-4061-afa0-122aa0f01357 is in state STARTED
2025-07-04 18:12:01.765394 | orchestrator | 2025-07-04 18:12:01 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED
2025-07-04 18:12:01.765423 | orchestrator | 2025-07-04 18:12:01 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:12:04.808251 | orchestrator | 2025-07-04 18:12:04 | INFO  | Task a21bc798-738f-4a98-9b83-5e93eed95645 is in state STARTED
2025-07-04 18:12:04.810573 | orchestrator | 2025-07-04 18:12:04 | INFO  | Task 6e1b8e93-1adc-4061-afa0-122aa0f01357 is in state STARTED
2025-07-04 18:12:04.813150 | orchestrator | 2025-07-04 18:12:04 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED
2025-07-04 18:12:04.813306 | orchestrator | 2025-07-04 18:12:04 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:12:07.879799 | orchestrator | 2025-07-04 18:12:07 | INFO  | Task a21bc798-738f-4a98-9b83-5e93eed95645 is in state STARTED
2025-07-04 18:12:07.885572 | orchestrator |
2025-07-04 18:12:07.885705 | orchestrator | None
2025-07-04 18:12:07.885735 | orchestrator |
2025-07-04 18:12:07.885745 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-04 18:12:07.885754 | orchestrator |
2025-07-04 18:12:07.885761 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-04 18:12:07.885768 | orchestrator | Friday 04 July 2025 18:09:18 +0000 (0:00:00.594) 0:00:00.594 ***********
2025-07-04 18:12:07.885775 | orchestrator | ok: [testbed-node-3]
2025-07-04 18:12:07.885784 | orchestrator | ok: [testbed-node-4]
2025-07-04 18:12:07.885899 | orchestrator | ok: [testbed-node-5]
2025-07-04 18:12:07.885918 | orchestrator | ok: [testbed-node-0]
2025-07-04 18:12:07.885926 | orchestrator | ok: [testbed-node-1]
2025-07-04 18:12:07.885942 | orchestrator | ok: [testbed-node-2]
2025-07-04 18:12:07.885949 | orchestrator |
2025-07-04 18:12:07.885956 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-04 18:12:07.885964 | orchestrator | Friday 04 July 2025 18:09:20 +0000 (0:00:02.215) 0:00:02.810 ***********
2025-07-04 18:12:07.885995 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True)
2025-07-04 18:12:07.886004 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True)
2025-07-04 18:12:07.886012 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True)
2025-07-04 18:12:07.886061 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True)
2025-07-04 18:12:07.886070 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True)
2025-07-04 18:12:07.886079 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True)
2025-07-04 18:12:07.886087 | orchestrator |
2025-07-04 18:12:07.886095 | orchestrator | PLAY [Apply role ovn-controller] ***********************************************
2025-07-04 18:12:07.886104 | orchestrator |
2025-07-04 18:12:07.886112 | orchestrator | TASK [ovn-controller : include_tasks] ******************************************
2025-07-04 18:12:07.886120 | orchestrator | Friday 04 July 2025 18:09:22 +0000 (0:00:01.979) 0:00:04.789 ***********
2025-07-04 18:12:07.886130 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-07-04 18:12:07.886141 | orchestrator |
2025-07-04 18:12:07.886150 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] **********************
2025-07-04 18:12:07.886159 | orchestrator | Friday 04 July 2025 18:09:25 +0000 (0:00:02.250) 0:00:07.040 ***********
2025-07-04 18:12:07.886185 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:12:07.886197 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:12:07.886206 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:12:07.886215 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:12:07.886223 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:12:07.886249 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:12:07.886268 | orchestrator |
2025-07-04 18:12:07.886278 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************
2025-07-04 18:12:07.886287 | orchestrator | Friday 04 July 2025 18:09:27 +0000 (0:00:02.210) 0:00:09.251 ***********
2025-07-04 18:12:07.886296 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:12:07.886305 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:12:07.886319 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:12:07.886327 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:12:07.886337 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:12:07.886344 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:12:07.886353 | orchestrator |
2025-07-04 18:12:07.886361 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] *************
2025-07-04 18:12:07.886369 | orchestrator | Friday 04 July 2025 18:09:29 +0000 (0:00:02.285) 0:00:11.536 ***********
2025-07-04 18:12:07.886377 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:12:07.886386 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:12:07.886416 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:12:07.886425 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:12:07.886433 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:12:07.886446 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:12:07.886455 | orchestrator |
2025-07-04 18:12:07.886464 | orchestrator | TASK [ovn-controller : Copying over systemd override] **************************
2025-07-04 18:12:07.886472 | orchestrator | Friday 04 July 2025 18:09:31 +0000 (0:00:01.449) 0:00:12.985 ***********
2025-07-04 18:12:07.886481 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:12:07.886489 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:12:07.886497 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:12:07.886505 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:12:07.886520 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:12:07.886535 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:12:07.886542 | orchestrator |
2025-07-04 18:12:07.886549 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************
2025-07-04 18:12:07.886557 | orchestrator | Friday 04 July 2025 18:09:33 +0000 (0:00:02.381) 0:00:15.367 ***********
2025-07-04 18:12:07.886565 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:12:07.886572 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:12:07.886615 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:12:07.886626 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:12:07.886634 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:12:07.886641 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller',
'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-04 18:12:07.886654 | orchestrator | 2025-07-04 18:12:07.886662 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2025-07-04 18:12:07.886669 | orchestrator | Friday 04 July 2025 18:09:37 +0000 (0:00:03.968) 0:00:19.336 *********** 2025-07-04 18:12:07.886677 | orchestrator | changed: [testbed-node-3] 2025-07-04 18:12:07.886686 | orchestrator | changed: [testbed-node-4] 2025-07-04 18:12:07.886694 | orchestrator | changed: [testbed-node-5] 2025-07-04 18:12:07.886701 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:12:07.886709 | orchestrator | changed: [testbed-node-1] 2025-07-04 18:12:07.886717 | orchestrator | changed: [testbed-node-2] 2025-07-04 18:12:07.886724 | orchestrator | 2025-07-04 18:12:07.886732 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2025-07-04 18:12:07.886738 | orchestrator | Friday 04 July 2025 18:09:40 +0000 (0:00:03.318) 0:00:22.654 *********** 2025-07-04 18:12:07.886745 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2025-07-04 18:12:07.886751 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2025-07-04 18:12:07.886757 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2025-07-04 18:12:07.886769 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2025-07-04 18:12:07.886775 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2025-07-04 18:12:07.886781 | orchestrator | changed: 
[testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2025-07-04 18:12:07.886787 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-07-04 18:12:07.886793 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-07-04 18:12:07.886799 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-07-04 18:12:07.886806 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-07-04 18:12:07.886813 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-07-04 18:12:07.886819 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-07-04 18:12:07.886826 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-07-04 18:12:07.886835 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-07-04 18:12:07.886841 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-07-04 18:12:07.886848 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-07-04 18:12:07.886854 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-07-04 18:12:07.886865 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-07-04 18:12:07.886871 | orchestrator | changed: 
[testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-07-04 18:12:07.886879 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-07-04 18:12:07.886885 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-07-04 18:12:07.886898 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-07-04 18:12:07.886905 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-07-04 18:12:07.886911 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-07-04 18:12:07.886918 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-07-04 18:12:07.886926 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-07-04 18:12:07.886933 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-07-04 18:12:07.886940 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-07-04 18:12:07.886947 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-07-04 18:12:07.886953 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-07-04 18:12:07.886961 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-07-04 18:12:07.886968 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-07-04 18:12:07.886974 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-07-04 18:12:07.886981 | orchestrator | changed: 
[testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-07-04 18:12:07.886989 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-07-04 18:12:07.886996 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-07-04 18:12:07.887002 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-07-04 18:12:07.887009 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-07-04 18:12:07.887017 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-07-04 18:12:07.887024 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-07-04 18:12:07.887036 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2025-07-04 18:12:07.887044 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-07-04 18:12:07.887051 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-07-04 18:12:07.887058 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2025-07-04 18:12:07.887065 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2025-07-04 18:12:07.887072 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2025-07-04 18:12:07.887080 | orchestrator | ok: 
[testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-07-04 18:12:07.887087 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2025-07-04 18:12:07.887094 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2025-07-04 18:12:07.887102 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-07-04 18:12:07.887115 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-07-04 18:12:07.887123 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-07-04 18:12:07.887133 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-07-04 18:12:07.887140 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-07-04 18:12:07.887147 | orchestrator | 2025-07-04 18:12:07.887154 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-07-04 18:12:07.887160 | orchestrator | Friday 04 July 2025 18:10:01 +0000 (0:00:21.123) 0:00:43.778 *********** 2025-07-04 18:12:07.887167 | orchestrator | 2025-07-04 18:12:07.887173 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-07-04 18:12:07.887180 | orchestrator | Friday 04 July 2025 18:10:01 +0000 (0:00:00.065) 0:00:43.843 *********** 2025-07-04 18:12:07.887187 | orchestrator | 2025-07-04 18:12:07.887193 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 
2025-07-04 18:12:07.887200 | orchestrator | Friday 04 July 2025 18:10:02 +0000 (0:00:00.066) 0:00:43.910 *********** 2025-07-04 18:12:07.887206 | orchestrator | 2025-07-04 18:12:07.887213 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-07-04 18:12:07.887220 | orchestrator | Friday 04 July 2025 18:10:02 +0000 (0:00:00.067) 0:00:43.977 *********** 2025-07-04 18:12:07.887226 | orchestrator | 2025-07-04 18:12:07.887233 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-07-04 18:12:07.887239 | orchestrator | Friday 04 July 2025 18:10:02 +0000 (0:00:00.065) 0:00:44.042 *********** 2025-07-04 18:12:07.887247 | orchestrator | 2025-07-04 18:12:07.887254 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-07-04 18:12:07.887261 | orchestrator | Friday 04 July 2025 18:10:02 +0000 (0:00:00.064) 0:00:44.107 *********** 2025-07-04 18:12:07.887268 | orchestrator | 2025-07-04 18:12:07.887276 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2025-07-04 18:12:07.887283 | orchestrator | Friday 04 July 2025 18:10:02 +0000 (0:00:00.068) 0:00:44.175 *********** 2025-07-04 18:12:07.887290 | orchestrator | ok: [testbed-node-3] 2025-07-04 18:12:07.887296 | orchestrator | ok: [testbed-node-5] 2025-07-04 18:12:07.887303 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:12:07.887310 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:12:07.887317 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:12:07.887324 | orchestrator | ok: [testbed-node-4] 2025-07-04 18:12:07.887330 | orchestrator | 2025-07-04 18:12:07.887337 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2025-07-04 18:12:07.887344 | orchestrator | Friday 04 July 2025 18:10:04 +0000 (0:00:02.099) 0:00:46.275 *********** 2025-07-04 18:12:07.887351 | orchestrator | changed: 
[testbed-node-0] 2025-07-04 18:12:07.887358 | orchestrator | changed: [testbed-node-5] 2025-07-04 18:12:07.887364 | orchestrator | changed: [testbed-node-2] 2025-07-04 18:12:07.887371 | orchestrator | changed: [testbed-node-4] 2025-07-04 18:12:07.887378 | orchestrator | changed: [testbed-node-3] 2025-07-04 18:12:07.887385 | orchestrator | changed: [testbed-node-1] 2025-07-04 18:12:07.887392 | orchestrator | 2025-07-04 18:12:07.887400 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2025-07-04 18:12:07.887407 | orchestrator | 2025-07-04 18:12:07.887414 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-07-04 18:12:07.887421 | orchestrator | Friday 04 July 2025 18:10:48 +0000 (0:00:43.815) 0:01:30.090 *********** 2025-07-04 18:12:07.887429 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-04 18:12:07.887436 | orchestrator | 2025-07-04 18:12:07.887444 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-07-04 18:12:07.887457 | orchestrator | Friday 04 July 2025 18:10:48 +0000 (0:00:00.690) 0:01:30.780 *********** 2025-07-04 18:12:07.887465 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-04 18:12:07.887472 | orchestrator | 2025-07-04 18:12:07.887485 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2025-07-04 18:12:07.887493 | orchestrator | Friday 04 July 2025 18:10:49 +0000 (0:00:00.727) 0:01:31.508 *********** 2025-07-04 18:12:07.887500 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:12:07.887507 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:12:07.887513 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:12:07.887519 | orchestrator | 2025-07-04 18:12:07.887525 | orchestrator | TASK [ovn-db : Divide 
hosts by their OVN NB volume availability] *************** 2025-07-04 18:12:07.887531 | orchestrator | Friday 04 July 2025 18:10:50 +0000 (0:00:00.865) 0:01:32.373 *********** 2025-07-04 18:12:07.887538 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:12:07.887545 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:12:07.887552 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:12:07.887560 | orchestrator | 2025-07-04 18:12:07.887567 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2025-07-04 18:12:07.887574 | orchestrator | Friday 04 July 2025 18:10:50 +0000 (0:00:00.288) 0:01:32.662 *********** 2025-07-04 18:12:07.887581 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:12:07.887611 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:12:07.887618 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:12:07.887624 | orchestrator | 2025-07-04 18:12:07.887631 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2025-07-04 18:12:07.887638 | orchestrator | Friday 04 July 2025 18:10:51 +0000 (0:00:00.405) 0:01:33.067 *********** 2025-07-04 18:12:07.887644 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:12:07.887651 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:12:07.887657 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:12:07.887664 | orchestrator | 2025-07-04 18:12:07.887670 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2025-07-04 18:12:07.887676 | orchestrator | Friday 04 July 2025 18:10:51 +0000 (0:00:00.449) 0:01:33.516 *********** 2025-07-04 18:12:07.887683 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:12:07.887689 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:12:07.887695 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:12:07.887702 | orchestrator | 2025-07-04 18:12:07.887708 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2025-07-04 
18:12:07.887714 | orchestrator | Friday 04 July 2025 18:10:51 +0000 (0:00:00.304) 0:01:33.820 *********** 2025-07-04 18:12:07.887721 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:12:07.887727 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:12:07.887734 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:12:07.887741 | orchestrator | 2025-07-04 18:12:07.887760 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2025-07-04 18:12:07.887767 | orchestrator | Friday 04 July 2025 18:10:52 +0000 (0:00:00.284) 0:01:34.104 *********** 2025-07-04 18:12:07.887774 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:12:07.887780 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:12:07.887786 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:12:07.887791 | orchestrator | 2025-07-04 18:12:07.887797 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2025-07-04 18:12:07.887803 | orchestrator | Friday 04 July 2025 18:10:52 +0000 (0:00:00.297) 0:01:34.402 *********** 2025-07-04 18:12:07.887809 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:12:07.887815 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:12:07.887821 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:12:07.887827 | orchestrator | 2025-07-04 18:12:07.887833 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2025-07-04 18:12:07.887840 | orchestrator | Friday 04 July 2025 18:10:52 +0000 (0:00:00.412) 0:01:34.815 *********** 2025-07-04 18:12:07.887846 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:12:07.887859 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:12:07.887866 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:12:07.887873 | orchestrator | 2025-07-04 18:12:07.887879 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2025-07-04 
18:12:07.887885 | orchestrator | Friday 04 July 2025 18:10:53 +0000 (0:00:00.315) 0:01:35.131 *********** 2025-07-04 18:12:07.887892 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:12:07.887898 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:12:07.887905 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:12:07.887911 | orchestrator | 2025-07-04 18:12:07.887917 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2025-07-04 18:12:07.887923 | orchestrator | Friday 04 July 2025 18:10:53 +0000 (0:00:00.284) 0:01:35.415 *********** 2025-07-04 18:12:07.887930 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:12:07.887936 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:12:07.887942 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:12:07.887949 | orchestrator | 2025-07-04 18:12:07.887955 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2025-07-04 18:12:07.887961 | orchestrator | Friday 04 July 2025 18:10:53 +0000 (0:00:00.290) 0:01:35.706 *********** 2025-07-04 18:12:07.887967 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:12:07.887974 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:12:07.887981 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:12:07.887988 | orchestrator | 2025-07-04 18:12:07.887994 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2025-07-04 18:12:07.888001 | orchestrator | Friday 04 July 2025 18:10:54 +0000 (0:00:00.435) 0:01:36.141 *********** 2025-07-04 18:12:07.888008 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:12:07.888015 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:12:07.888022 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:12:07.888029 | orchestrator | 2025-07-04 18:12:07.888036 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2025-07-04 
18:12:07.888043 | orchestrator | Friday 04 July 2025 18:10:54 +0000 (0:00:00.280) 0:01:36.422 *********** 2025-07-04 18:12:07.888050 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:12:07.888057 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:12:07.888063 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:12:07.888070 | orchestrator | 2025-07-04 18:12:07.888077 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2025-07-04 18:12:07.888084 | orchestrator | Friday 04 July 2025 18:10:54 +0000 (0:00:00.257) 0:01:36.679 *********** 2025-07-04 18:12:07.888091 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:12:07.888098 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:12:07.888105 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:12:07.888112 | orchestrator | 2025-07-04 18:12:07.888128 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2025-07-04 18:12:07.888135 | orchestrator | Friday 04 July 2025 18:10:55 +0000 (0:00:00.263) 0:01:36.943 *********** 2025-07-04 18:12:07.888141 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:12:07.888148 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:12:07.888154 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:12:07.888162 | orchestrator | 2025-07-04 18:12:07.888168 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2025-07-04 18:12:07.888175 | orchestrator | Friday 04 July 2025 18:10:55 +0000 (0:00:00.486) 0:01:37.429 *********** 2025-07-04 18:12:07.888181 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:12:07.888188 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:12:07.888194 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:12:07.888200 | orchestrator | 2025-07-04 18:12:07.888207 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-07-04 
18:12:07.888213 | orchestrator | Friday 04 July 2025 18:10:55 +0000 (0:00:00.313) 0:01:37.743 *********** 2025-07-04 18:12:07.888221 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-04 18:12:07.888235 | orchestrator | 2025-07-04 18:12:07.888241 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2025-07-04 18:12:07.888248 | orchestrator | Friday 04 July 2025 18:10:56 +0000 (0:00:00.568) 0:01:38.311 *********** 2025-07-04 18:12:07.888256 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:12:07.888263 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:12:07.888270 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:12:07.888277 | orchestrator | 2025-07-04 18:12:07.888284 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2025-07-04 18:12:07.888290 | orchestrator | Friday 04 July 2025 18:10:57 +0000 (0:00:00.932) 0:01:39.244 *********** 2025-07-04 18:12:07.888297 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:12:07.888303 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:12:07.888310 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:12:07.888317 | orchestrator | 2025-07-04 18:12:07.888323 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2025-07-04 18:12:07.888330 | orchestrator | Friday 04 July 2025 18:10:58 +0000 (0:00:00.661) 0:01:39.905 *********** 2025-07-04 18:12:07.888336 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:12:07.888343 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:12:07.888355 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:12:07.888362 | orchestrator | 2025-07-04 18:12:07.888369 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2025-07-04 18:12:07.888375 | orchestrator | Friday 04 July 2025 18:10:58 +0000 (0:00:00.412) 0:01:40.318 
*********** 2025-07-04 18:12:07.888382 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:12:07.888389 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:12:07.888395 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:12:07.888402 | orchestrator | 2025-07-04 18:12:07.888409 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2025-07-04 18:12:07.888415 | orchestrator | Friday 04 July 2025 18:10:59 +0000 (0:00:00.576) 0:01:40.894 *********** 2025-07-04 18:12:07.888422 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:12:07.888429 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:12:07.888436 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:12:07.888444 | orchestrator | 2025-07-04 18:12:07.888451 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2025-07-04 18:12:07.888458 | orchestrator | Friday 04 July 2025 18:10:59 +0000 (0:00:00.795) 0:01:41.690 *********** 2025-07-04 18:12:07.888465 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:12:07.888471 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:12:07.888478 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:12:07.888486 | orchestrator | 2025-07-04 18:12:07.888493 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2025-07-04 18:12:07.888500 | orchestrator | Friday 04 July 2025 18:11:00 +0000 (0:00:00.405) 0:01:42.095 *********** 2025-07-04 18:12:07.888507 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:12:07.888514 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:12:07.888521 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:12:07.888528 | orchestrator | 2025-07-04 18:12:07.888535 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2025-07-04 18:12:07.888541 | orchestrator | Friday 04 July 2025 18:11:00 +0000 (0:00:00.340) 
0:01:42.435 *********** 2025-07-04 18:12:07.888548 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:12:07.888554 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:12:07.888561 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:12:07.888568 | orchestrator | 2025-07-04 18:12:07.888574 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-07-04 18:12:07.888625 | orchestrator | Friday 04 July 2025 18:11:00 +0000 (0:00:00.305) 0:01:42.741 *********** 2025-07-04 18:12:07.888637 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-04 18:12:07.888654 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-04 18:12:07.888670 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-04 18:12:07 | INFO  | Task 6e1b8e93-1adc-4061-afa0-122aa0f01357 is in state SUCCESS 2025-07-04 18:12:07.888691 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 
'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-04 18:12:07.888701 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-04 18:12:07.888709 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-04 18:12:07.888746 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-04 18:12:07.888755 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': 
['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-04 18:12:07.888762 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-04 18:12:07.888768 | orchestrator | 2025-07-04 18:12:07.888775 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-07-04 18:12:07.888782 | orchestrator | Friday 04 July 2025 18:11:02 +0000 (0:00:01.439) 0:01:44.180 *********** 2025-07-04 18:12:07.888797 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-04 18:12:07.888804 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-04 18:12:07.888818 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-04 18:12:07.888825 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-04 18:12:07.888832 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-04 18:12:07.888840 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-04 18:12:07.888850 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-04 18:12:07.888858 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-04 18:12:07.888864 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-04 18:12:07.888871 | orchestrator | 2025-07-04 18:12:07.888878 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-07-04 18:12:07.888890 | orchestrator | Friday 04 July 2025 18:11:06 +0000 (0:00:03.982) 0:01:48.163 *********** 2025-07-04 18:12:07.888897 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-04 18:12:07.888903 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-04 18:12:07.888910 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-04 18:12:07.888925 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-04 18:12:07.888932 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-04 18:12:07.888939 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-04 18:12:07.888945 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 
'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-04 18:12:07.888956 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-04 18:12:07.888964 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-04 18:12:07.888971 | orchestrator | 2025-07-04 18:12:07.888984 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-07-04 18:12:07.888991 | orchestrator | Friday 04 July 2025 18:11:08 +0000 (0:00:01.955) 0:01:50.118 *********** 2025-07-04 18:12:07.888998 | orchestrator | 2025-07-04 18:12:07.889005 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-07-04 18:12:07.889012 | orchestrator | Friday 04 July 2025 18:11:08 +0000 (0:00:00.079) 0:01:50.198 *********** 2025-07-04 18:12:07.889019 | orchestrator | 2025-07-04 18:12:07.889026 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-07-04 
18:12:07.889033 | orchestrator | Friday 04 July 2025 18:11:08 +0000 (0:00:00.069) 0:01:50.268 *********** 2025-07-04 18:12:07.889040 | orchestrator | 2025-07-04 18:12:07.889047 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-07-04 18:12:07.889053 | orchestrator | Friday 04 July 2025 18:11:08 +0000 (0:00:00.065) 0:01:50.333 *********** 2025-07-04 18:12:07.889061 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:12:07.889068 | orchestrator | changed: [testbed-node-1] 2025-07-04 18:12:07.889075 | orchestrator | changed: [testbed-node-2] 2025-07-04 18:12:07.889082 | orchestrator | 2025-07-04 18:12:07.889088 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-07-04 18:12:07.889095 | orchestrator | Friday 04 July 2025 18:11:11 +0000 (0:00:02.701) 0:01:53.035 *********** 2025-07-04 18:12:07.889102 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:12:07.889109 | orchestrator | changed: [testbed-node-1] 2025-07-04 18:12:07.889115 | orchestrator | changed: [testbed-node-2] 2025-07-04 18:12:07.889122 | orchestrator | 2025-07-04 18:12:07.889128 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-07-04 18:12:07.889135 | orchestrator | Friday 04 July 2025 18:11:18 +0000 (0:00:07.607) 0:02:00.643 *********** 2025-07-04 18:12:07.889141 | orchestrator | changed: [testbed-node-1] 2025-07-04 18:12:07.889148 | orchestrator | changed: [testbed-node-2] 2025-07-04 18:12:07.889155 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:12:07.889162 | orchestrator | 2025-07-04 18:12:07.889168 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-07-04 18:12:07.889174 | orchestrator | Friday 04 July 2025 18:11:25 +0000 (0:00:06.668) 0:02:07.311 *********** 2025-07-04 18:12:07.889181 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:12:07.889188 | orchestrator | 
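The "Get OVN_Northbound cluster leader" tasks that follow determine which node currently holds the Raft leadership of the OVN database cluster; conceptually this comes down to reading the `Role:` field from `ovs-appctl … cluster/status OVN_Northbound` output on each node. A minimal parsing sketch, assuming the standard `cluster/status` layout (the sample text below is illustrative, not captured from this job, and this is not kolla-ansible's actual implementation):

```python
# Sketch: decide whether a node is the Raft leader by parsing the
# key/value lines of `ovs-appctl ... cluster/status OVN_Northbound`.

def is_cluster_leader(status_output: str) -> bool:
    """Return True if this node reports itself as the cluster leader."""
    for line in status_output.splitlines():
        key, _, value = line.partition(":")
        if key.strip() == "Role":
            return value.strip() == "leader"
    return False

# Illustrative sample in the cluster/status key/value style.
sample = """\
Name: OVN_Northbound
Cluster ID: 5c34 (5c34...)
Server ID: 1b2c (1b2c...)
Address: tcp:192.168.16.10:6643
Status: cluster member
Role: leader
Term: 1
"""

print(is_cluster_leader(sample))  # -> True for a node reporting Role: leader
```

Only the node that parses out `Role: leader` then runs the connection-settings tasks; the other nodes skip them, which matches the `skipping: [testbed-node-1] / [testbed-node-2]` lines below.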
2025-07-04 18:12:07.889194 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-07-04 18:12:07.889200 | orchestrator | Friday 04 July 2025 18:11:25 +0000 (0:00:00.123) 0:02:07.435 *********** 2025-07-04 18:12:07.889207 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:12:07.889220 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:12:07.889227 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:12:07.889234 | orchestrator | 2025-07-04 18:12:07.889241 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-07-04 18:12:07.889248 | orchestrator | Friday 04 July 2025 18:11:26 +0000 (0:00:00.832) 0:02:08.267 *********** 2025-07-04 18:12:07.889255 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:12:07.889262 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:12:07.889269 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:12:07.889275 | orchestrator | 2025-07-04 18:12:07.889282 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-07-04 18:12:07.889289 | orchestrator | Friday 04 July 2025 18:11:27 +0000 (0:00:00.860) 0:02:09.128 *********** 2025-07-04 18:12:07.889295 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:12:07.889301 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:12:07.889308 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:12:07.889315 | orchestrator | 2025-07-04 18:12:07.889322 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-07-04 18:12:07.889329 | orchestrator | Friday 04 July 2025 18:11:28 +0000 (0:00:00.754) 0:02:09.883 *********** 2025-07-04 18:12:07.889335 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:12:07.889342 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:12:07.889355 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:12:07.889362 | orchestrator | 2025-07-04 18:12:07.889370 | orchestrator 
| TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-07-04 18:12:07.889377 | orchestrator | Friday 04 July 2025 18:11:28 +0000 (0:00:00.643) 0:02:10.526 *********** 2025-07-04 18:12:07.889383 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:12:07.889389 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:12:07.889395 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:12:07.889402 | orchestrator | 2025-07-04 18:12:07.889408 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-07-04 18:12:07.889415 | orchestrator | Friday 04 July 2025 18:11:29 +0000 (0:00:00.706) 0:02:11.232 *********** 2025-07-04 18:12:07.889421 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:12:07.889428 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:12:07.889435 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:12:07.889441 | orchestrator | 2025-07-04 18:12:07.889448 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2025-07-04 18:12:07.889455 | orchestrator | Friday 04 July 2025 18:11:30 +0000 (0:00:01.444) 0:02:12.677 *********** 2025-07-04 18:12:07.889461 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:12:07.889473 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:12:07.889480 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:12:07.889486 | orchestrator | 2025-07-04 18:12:07.889493 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-07-04 18:12:07.889500 | orchestrator | Friday 04 July 2025 18:11:31 +0000 (0:00:00.354) 0:02:13.032 *********** 2025-07-04 18:12:07.889508 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-04 18:12:07.889516 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-04 18:12:07.889522 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-04 18:12:07.889529 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-04 18:12:07.889536 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-04 18:12:07.889550 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-04 18:12:07.889563 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-04 18:12:07.889571 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-04 18:12:07.889578 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-04 18:12:07.889635 | orchestrator | 2025-07-04 18:12:07.889644 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-07-04 18:12:07.889651 | orchestrator | Friday 04 July 2025 18:11:32 +0000 (0:00:01.593) 0:02:14.626 *********** 2025-07-04 18:12:07.889663 | orchestrator | ok: [testbed-node-0] => (item={'key': 
'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-04 18:12:07.889670 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-04 18:12:07.889677 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-04 18:12:07.889684 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-04 18:12:07.889691 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-04 18:12:07.889698 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-04 18:12:07.889721 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-04 18:12:07.889728 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-04 18:12:07.889735 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-04 18:12:07.889741 | orchestrator | 2025-07-04 18:12:07.889748 | orchestrator | TASK [ovn-db : Check ovn containers] 
******************************************* 2025-07-04 18:12:07.889754 | orchestrator | Friday 04 July 2025 18:11:38 +0000 (0:00:05.254) 0:02:19.880 *********** 2025-07-04 18:12:07.889764 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-04 18:12:07.889772 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-04 18:12:07.889779 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-04 18:12:07.889786 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-04 18:12:07.889793 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-04 18:12:07.889800 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-04 18:12:07.889813 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-04 18:12:07.889824 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-04 18:12:07.889830 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-04 18:12:07.889837 | orchestrator | 2025-07-04 18:12:07.889843 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-07-04 18:12:07.889850 | orchestrator | Friday 04 July 2025 18:11:41 +0000 (0:00:03.482) 0:02:23.363 *********** 2025-07-04 18:12:07.889857 | orchestrator | 2025-07-04 18:12:07.889863 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-07-04 18:12:07.889869 | orchestrator | Friday 04 July 2025 18:11:41 +0000 (0:00:00.068) 0:02:23.431 *********** 2025-07-04 18:12:07.889876 | orchestrator | 2025-07-04 18:12:07.889882 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-07-04 18:12:07.889888 | orchestrator | Friday 04 July 2025 18:11:41 +0000 (0:00:00.064) 0:02:23.496 *********** 2025-07-04 18:12:07.889895 | orchestrator | 2025-07-04 18:12:07.889901 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-07-04 18:12:07.889907 | orchestrator | Friday 04 July 2025 18:11:41 +0000 (0:00:00.063) 0:02:23.559 *********** 2025-07-04 18:12:07.889914 | orchestrator | changed: [testbed-node-1] 2025-07-04 18:12:07.889920 | orchestrator | changed: [testbed-node-2] 2025-07-04 18:12:07.889927 | orchestrator | 2025-07-04 18:12:07.889937 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-07-04 18:12:07.889944 | orchestrator | Friday 04 July 2025 18:11:47 +0000 (0:00:06.163) 0:02:29.723 *********** 2025-07-04 18:12:07.889951 | orchestrator | changed: [testbed-node-1] 2025-07-04 18:12:07.889957 | orchestrator | changed: [testbed-node-2] 2025-07-04 18:12:07.889964 | orchestrator | 2025-07-04 18:12:07.889970 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] 
************************ 2025-07-04 18:12:07.889978 | orchestrator | Friday 04 July 2025 18:11:54 +0000 (0:00:06.282) 0:02:36.007 *********** 2025-07-04 18:12:07.889984 | orchestrator | changed: [testbed-node-2] 2025-07-04 18:12:07.889991 | orchestrator | changed: [testbed-node-1] 2025-07-04 18:12:07.889996 | orchestrator | 2025-07-04 18:12:07.890002 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-07-04 18:12:07.890009 | orchestrator | Friday 04 July 2025 18:12:00 +0000 (0:00:06.328) 0:02:42.336 *********** 2025-07-04 18:12:07.890068 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:12:07.890077 | orchestrator | 2025-07-04 18:12:07.890084 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-07-04 18:12:07.890090 | orchestrator | Friday 04 July 2025 18:12:00 +0000 (0:00:00.137) 0:02:42.473 *********** 2025-07-04 18:12:07.890096 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:12:07.890112 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:12:07.890119 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:12:07.890126 | orchestrator | 2025-07-04 18:12:07.890133 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-07-04 18:12:07.890140 | orchestrator | Friday 04 July 2025 18:12:01 +0000 (0:00:01.082) 0:02:43.556 *********** 2025-07-04 18:12:07.890147 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:12:07.890154 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:12:07.890160 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:12:07.890166 | orchestrator | 2025-07-04 18:12:07.890173 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-07-04 18:12:07.890179 | orchestrator | Friday 04 July 2025 18:12:02 +0000 (0:00:00.680) 0:02:44.236 *********** 2025-07-04 18:12:07.890185 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:12:07.890192 | 
orchestrator | ok: [testbed-node-1] 2025-07-04 18:12:07.890198 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:12:07.890205 | orchestrator | 2025-07-04 18:12:07.890211 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-07-04 18:12:07.890217 | orchestrator | Friday 04 July 2025 18:12:03 +0000 (0:00:00.835) 0:02:45.072 *********** 2025-07-04 18:12:07.890223 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:12:07.890230 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:12:07.890236 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:12:07.890242 | orchestrator | 2025-07-04 18:12:07.890249 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-07-04 18:12:07.890255 | orchestrator | Friday 04 July 2025 18:12:03 +0000 (0:00:00.661) 0:02:45.733 *********** 2025-07-04 18:12:07.890262 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:12:07.890268 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:12:07.890275 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:12:07.890281 | orchestrator | 2025-07-04 18:12:07.890288 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-07-04 18:12:07.890295 | orchestrator | Friday 04 July 2025 18:12:04 +0000 (0:00:01.094) 0:02:46.828 *********** 2025-07-04 18:12:07.890301 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:12:07.890308 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:12:07.890314 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:12:07.890321 | orchestrator | 2025-07-04 18:12:07.890327 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-04 18:12:07.890333 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-07-04 18:12:07.890351 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 
2025-07-04 18:12:07.890358 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-07-04 18:12:07.890365 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-04 18:12:07.890372 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-04 18:12:07.890378 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-04 18:12:07.890384 | orchestrator | 2025-07-04 18:12:07.890391 | orchestrator | 2025-07-04 18:12:07.890398 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-04 18:12:07.890405 | orchestrator | Friday 04 July 2025 18:12:06 +0000 (0:00:01.048) 0:02:47.877 *********** 2025-07-04 18:12:07.890412 | orchestrator | =============================================================================== 2025-07-04 18:12:07.890419 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 43.82s 2025-07-04 18:12:07.890425 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 21.12s 2025-07-04 18:12:07.890438 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 13.89s 2025-07-04 18:12:07.890444 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 13.00s 2025-07-04 18:12:07.890451 | orchestrator | ovn-db : Restart ovn-nb-db container ------------------------------------ 8.87s 2025-07-04 18:12:07.890458 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 5.25s 2025-07-04 18:12:07.890464 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.98s 2025-07-04 18:12:07.890476 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 3.97s 2025-07-04 
18:12:07.890483 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.48s 2025-07-04 18:12:07.890490 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 3.32s 2025-07-04 18:12:07.890496 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 2.38s 2025-07-04 18:12:07.890502 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 2.29s 2025-07-04 18:12:07.890508 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 2.25s 2025-07-04 18:12:07.890515 | orchestrator | Group hosts based on Kolla action --------------------------------------- 2.22s 2025-07-04 18:12:07.890522 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 2.21s 2025-07-04 18:12:07.890529 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 2.10s 2025-07-04 18:12:07.890535 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.98s 2025-07-04 18:12:07.890542 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 1.95s 2025-07-04 18:12:07.890548 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.59s 2025-07-04 18:12:07.890554 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.45s 2025-07-04 18:12:07.890560 | orchestrator | 2025-07-04 18:12:07 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED 2025-07-04 18:12:07.890567 | orchestrator | 2025-07-04 18:12:07 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:12:10.925513 | orchestrator | 2025-07-04 18:12:10 | INFO  | Task a21bc798-738f-4a98-9b83-5e93eed95645 is in state STARTED 2025-07-04 18:12:10.928634 | orchestrator | 2025-07-04 18:12:10 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED 
2025-07-04 18:14:40.371975 | orchestrator | 2025-07-04 18:14:40 | INFO  | Task a21bc798-738f-4a98-9b83-5e93eed95645 is in state SUCCESS 2025-07-04 18:14:40.372633 | orchestrator | 2025-07-04 18:14:40.374089 | orchestrator | 2025-07-04 18:14:40.374143 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-04 18:14:40.374157 | orchestrator | 2025-07-04 18:14:40.374169 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-04 18:14:40.374180 | orchestrator | Friday 04 July 2025 18:08:04 +0000 (0:00:00.412) 0:00:00.412 *********** 2025-07-04 18:14:40.374191 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:14:40.374204 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:14:40.374215 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:14:40.374287 | orchestrator | 2025-07-04 18:14:40.374301 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-04 18:14:40.374313 | orchestrator | Friday 04 July 2025 18:08:04 +0000 (0:00:00.472) 0:00:00.884 *********** 2025-07-04 18:14:40.374326 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2025-07-04 18:14:40.374337 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2025-07-04 18:14:40.374349 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2025-07-04 
18:14:40.374397 | orchestrator | 2025-07-04 18:14:40.374417 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2025-07-04 18:14:40.374437 | orchestrator | 2025-07-04 18:14:40.374456 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-07-04 18:14:40.374509 | orchestrator | Friday 04 July 2025 18:08:05 +0000 (0:00:00.784) 0:00:01.668 *********** 2025-07-04 18:14:40.374531 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-04 18:14:40.374550 | orchestrator | 2025-07-04 18:14:40.374570 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2025-07-04 18:14:40.374591 | orchestrator | Friday 04 July 2025 18:08:06 +0000 (0:00:01.224) 0:00:02.892 *********** 2025-07-04 18:14:40.374612 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:14:40.374625 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:14:40.374636 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:14:40.374647 | orchestrator | 2025-07-04 18:14:40.374658 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-07-04 18:14:40.374669 | orchestrator | Friday 04 July 2025 18:08:07 +0000 (0:00:00.821) 0:00:03.714 *********** 2025-07-04 18:14:40.374679 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-04 18:14:40.374690 | orchestrator | 2025-07-04 18:14:40.374700 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2025-07-04 18:14:40.374711 | orchestrator | Friday 04 July 2025 18:08:08 +0000 (0:00:01.089) 0:00:04.803 *********** 2025-07-04 18:14:40.374776 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:14:40.374790 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:14:40.374801 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:14:40.374811 | orchestrator | 
2025-07-04 18:14:40.374822 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2025-07-04 18:14:40.374833 | orchestrator | Friday 04 July 2025 18:08:09 +0000 (0:00:00.964) 0:00:05.768 *********** 2025-07-04 18:14:40.374844 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-07-04 18:14:40.374854 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-07-04 18:14:40.374865 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-07-04 18:14:40.374876 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-07-04 18:14:40.374886 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-07-04 18:14:40.374897 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-07-04 18:14:40.374952 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-07-04 18:14:40.374980 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-07-04 18:14:40.374992 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-07-04 18:14:40.375002 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-07-04 18:14:40.375013 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-07-04 18:14:40.375023 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-07-04 18:14:40.375034 | orchestrator | 2025-07-04 18:14:40.375044 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-07-04 
18:14:40.375055 | orchestrator | Friday 04 July 2025 18:08:12 +0000 (0:00:02.700) 0:00:08.469 *********** 2025-07-04 18:14:40.375066 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-07-04 18:14:40.375077 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-07-04 18:14:40.375097 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-07-04 18:14:40.375116 | orchestrator | 2025-07-04 18:14:40.375135 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-07-04 18:14:40.375154 | orchestrator | Friday 04 July 2025 18:08:13 +0000 (0:00:00.817) 0:00:09.287 *********** 2025-07-04 18:14:40.375175 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-07-04 18:14:40.375209 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-07-04 18:14:40.375226 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-07-04 18:14:40.375237 | orchestrator | 2025-07-04 18:14:40.375248 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-07-04 18:14:40.375266 | orchestrator | Friday 04 July 2025 18:08:14 +0000 (0:00:01.526) 0:00:10.813 *********** 2025-07-04 18:14:40.375283 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2025-07-04 18:14:40.375303 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:14:40.375344 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2025-07-04 18:14:40.375436 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:14:40.375495 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2025-07-04 18:14:40.375509 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:14:40.375519 | orchestrator | 2025-07-04 18:14:40.375530 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2025-07-04 18:14:40.375541 | orchestrator | Friday 04 July 2025 18:08:15 +0000 (0:00:00.965) 0:00:11.779 *********** 2025-07-04 18:14:40.375556 | orchestrator 
| changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-07-04 18:14:40.375576 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-07-04 18:14:40.375588 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': 
'30'}}}) 2025-07-04 18:14:40.375708 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-04 18:14:40.375854 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-04 18:14:40.375916 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-04 18:14:40.375934 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-07-04 18:14:40.375951 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-07-04 18:14:40.375968 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-07-04 18:14:40.375985 | orchestrator | 2025-07-04 18:14:40.376001 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2025-07-04 18:14:40.376018 | orchestrator | Friday 04 July 2025 18:08:18 +0000 
(0:00:02.138) 0:00:13.918 *********** 2025-07-04 18:14:40.376034 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:14:40.376050 | orchestrator | changed: [testbed-node-1] 2025-07-04 18:14:40.376066 | orchestrator | changed: [testbed-node-2] 2025-07-04 18:14:40.376081 | orchestrator | 2025-07-04 18:14:40.376097 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2025-07-04 18:14:40.376157 | orchestrator | Friday 04 July 2025 18:08:19 +0000 (0:00:01.165) 0:00:15.083 *********** 2025-07-04 18:14:40.376178 | orchestrator | changed: [testbed-node-0] => (item=users) 2025-07-04 18:14:40.376195 | orchestrator | changed: [testbed-node-1] => (item=users) 2025-07-04 18:14:40.376212 | orchestrator | changed: [testbed-node-2] => (item=users) 2025-07-04 18:14:40.376288 | orchestrator | changed: [testbed-node-1] => (item=rules) 2025-07-04 18:14:40.376308 | orchestrator | changed: [testbed-node-0] => (item=rules) 2025-07-04 18:14:40.376325 | orchestrator | changed: [testbed-node-2] => (item=rules) 2025-07-04 18:14:40.376344 | orchestrator | 2025-07-04 18:14:40.376386 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2025-07-04 18:14:40.376532 | orchestrator | Friday 04 July 2025 18:08:21 +0000 (0:00:01.934) 0:00:17.018 *********** 2025-07-04 18:14:40.376544 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:14:40.376554 | orchestrator | changed: [testbed-node-2] 2025-07-04 18:14:40.376572 | orchestrator | changed: [testbed-node-1] 2025-07-04 18:14:40.376582 | orchestrator | 2025-07-04 18:14:40.376592 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2025-07-04 18:14:40.376602 | orchestrator | Friday 04 July 2025 18:08:23 +0000 (0:00:02.149) 0:00:19.168 *********** 2025-07-04 18:14:40.376611 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:14:40.376621 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:14:40.376631 | 
orchestrator | ok: [testbed-node-1] 2025-07-04 18:14:40.376640 | orchestrator | 2025-07-04 18:14:40.376650 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2025-07-04 18:14:40.376659 | orchestrator | Friday 04 July 2025 18:08:24 +0000 (0:00:01.409) 0:00:20.577 *********** 2025-07-04 18:14:40.376670 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-07-04 18:14:40.376696 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-04 18:14:40.376707 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-04 18:14:40.376719 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250530', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__972b9ece33ee9ed34a1074742c399c9ff1d4bd20', '__omit_place_holder__972b9ece33ee9ed34a1074742c399c9ff1d4bd20'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-07-04 18:14:40.376730 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:14:40.376740 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-07-04 18:14:40.376763 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-04 18:14:40.376773 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-04 18:14:40.376796 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250530', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__972b9ece33ee9ed34a1074742c399c9ff1d4bd20', '__omit_place_holder__972b9ece33ee9ed34a1074742c399c9ff1d4bd20'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-07-04 18:14:40.376814 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:14:40.376831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 
'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-07-04 18:14:40.376848 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-04 18:14:40.376865 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-04 18:14:40.376893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 
'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250530', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__972b9ece33ee9ed34a1074742c399c9ff1d4bd20', '__omit_place_holder__972b9ece33ee9ed34a1074742c399c9ff1d4bd20'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-07-04 18:14:40.376910 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:14:40.376926 | orchestrator | 2025-07-04 18:14:40.376944 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2025-07-04 18:14:40.376962 | orchestrator | Friday 04 July 2025 18:08:25 +0000 (0:00:00.868) 0:00:21.446 *********** 2025-07-04 18:14:40.376979 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-07-04 18:14:40.377000 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-07-04 18:14:40.377011 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-07-04 18:14:40.377056 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-04 18:14:40.377067 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-04 18:14:40.377085 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250530', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__972b9ece33ee9ed34a1074742c399c9ff1d4bd20', '__omit_place_holder__972b9ece33ee9ed34a1074742c399c9ff1d4bd20'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-07-04 18:14:40.377205 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-04 18:14:40.377219 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-04 18:14:40.377237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250530', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__972b9ece33ee9ed34a1074742c399c9ff1d4bd20', '__omit_place_holder__972b9ece33ee9ed34a1074742c399c9ff1d4bd20'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-07-04 18:14:40.377248 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-04 18:14:40.377258 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-04 18:14:40.377276 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250530', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__972b9ece33ee9ed34a1074742c399c9ff1d4bd20', '__omit_place_holder__972b9ece33ee9ed34a1074742c399c9ff1d4bd20'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-07-04 18:14:40.377286 | orchestrator | 2025-07-04 18:14:40.377296 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2025-07-04 18:14:40.377306 | orchestrator | Friday 04 July 2025 18:08:29 +0000 (0:00:04.130) 0:00:25.577 *********** 2025-07-04 18:14:40.377326 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-07-04 18:14:40.377336 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-07-04 18:14:40.377377 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-07-04 18:14:40.377390 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-04 
18:14:40.377400 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-04 18:14:40.377418 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-04 18:14:40.377433 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-07-04 18:14:40.377444 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-07-04 18:14:40.377455 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-07-04 18:14:40.377464 | orchestrator | 2025-07-04 18:14:40.377475 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2025-07-04 18:14:40.377493 | orchestrator | Friday 04 July 2025 18:08:33 +0000 (0:00:03.949) 0:00:29.526 *********** 2025-07-04 18:14:40.377508 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-07-04 18:14:40.380071 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-07-04 18:14:40.380135 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-07-04 18:14:40.380145 | orchestrator | 2025-07-04 18:14:40.380156 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2025-07-04 18:14:40.380166 | orchestrator | Friday 04 July 2025 18:08:35 +0000 
(0:00:01.814) 0:00:31.341 *********** 2025-07-04 18:14:40.380176 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-07-04 18:14:40.380186 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-07-04 18:14:40.380195 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-07-04 18:14:40.380491 | orchestrator | 2025-07-04 18:14:40.380511 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2025-07-04 18:14:40.380521 | orchestrator | Friday 04 July 2025 18:08:38 +0000 (0:00:03.546) 0:00:34.888 *********** 2025-07-04 18:14:40.380530 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:14:40.380541 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:14:40.380550 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:14:40.380560 | orchestrator | 2025-07-04 18:14:40.380569 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2025-07-04 18:14:40.380579 | orchestrator | Friday 04 July 2025 18:08:39 +0000 (0:00:00.581) 0:00:35.470 *********** 2025-07-04 18:14:40.380589 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-07-04 18:14:40.380600 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-07-04 18:14:40.380610 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-07-04 18:14:40.380619 | orchestrator | 2025-07-04 18:14:40.380629 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2025-07-04 18:14:40.380639 | orchestrator | Friday 04 July 2025 18:08:42 +0000 
(0:00:02.986) 0:00:38.456 *********** 2025-07-04 18:14:40.380648 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-07-04 18:14:40.380658 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-07-04 18:14:40.380670 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-07-04 18:14:40.380680 | orchestrator | 2025-07-04 18:14:40.380691 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2025-07-04 18:14:40.380703 | orchestrator | Friday 04 July 2025 18:08:45 +0000 (0:00:02.907) 0:00:41.364 *********** 2025-07-04 18:14:40.380714 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2025-07-04 18:14:40.380725 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2025-07-04 18:14:40.380736 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2025-07-04 18:14:40.380747 | orchestrator | 2025-07-04 18:14:40.380758 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2025-07-04 18:14:40.380769 | orchestrator | Friday 04 July 2025 18:08:47 +0000 (0:00:01.851) 0:00:43.215 *********** 2025-07-04 18:14:40.380780 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2025-07-04 18:14:40.380791 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2025-07-04 18:14:40.380809 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2025-07-04 18:14:40.380821 | orchestrator | 2025-07-04 18:14:40.380832 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-07-04 18:14:40.380843 | orchestrator | Friday 04 July 2025 18:08:49 +0000 (0:00:01.937) 0:00:45.152 *********** 2025-07-04 18:14:40.380896 | orchestrator | included: 
/ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-04 18:14:40.380907 | orchestrator | 2025-07-04 18:14:40.380916 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2025-07-04 18:14:40.380924 | orchestrator | Friday 04 July 2025 18:08:49 +0000 (0:00:00.743) 0:00:45.896 *********** 2025-07-04 18:14:40.380934 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-07-04 18:14:40.380965 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-07-04 18:14:40.380975 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-07-04 18:14:40.380983 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-04 18:14:40.380992 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-04 18:14:40.381004 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-04 18:14:40.381013 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-07-04 18:14:40.381028 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-07-04 18:14:40.381043 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-07-04 18:14:40.381051 | orchestrator | 2025-07-04 18:14:40.381059 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2025-07-04 18:14:40.381067 | orchestrator | Friday 04 July 2025 18:08:53 +0000 (0:00:03.310) 0:00:49.206 *********** 2025-07-04 18:14:40.381075 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-07-04 18:14:40.381126 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-04 18:14:40.381135 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-07-04 18:14:40.381147 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-04 18:14:40.381156 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:14:40.381170 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-04 18:14:40.381185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 
'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-04 18:14:40.381194 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:14:40.381202 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-07-04 18:14:40.381210 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-04 18:14:40.381219 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 
'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-04 18:14:40.381227 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:14:40.381235 | orchestrator | 2025-07-04 18:14:40.381243 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2025-07-04 18:14:40.381251 | orchestrator | Friday 04 July 2025 18:08:54 +0000 (0:00:01.445) 0:00:50.652 *********** 2025-07-04 18:14:40.381263 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-07-04 18:14:40.381276 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-04 18:14:40.381289 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-04 18:14:40.381298 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:14:40.381306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-07-04 18:14:40.381314 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-04 18:14:40.381322 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-04 18:14:40.381330 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:14:40.381339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-07-04 18:14:40.381390 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-04 18:14:40.381409 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-04 18:14:40.381423 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:14:40.381437 | orchestrator | 2025-07-04 18:14:40.381445 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-07-04 18:14:40.381453 | orchestrator | Friday 04 July 2025 18:08:56 +0000 (0:00:01.788) 0:00:52.441 *********** 2025-07-04 18:14:40.381500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-07-04 18:14:40.381510 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-04 18:14:40.381518 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-04 18:14:40.381526 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:14:40.381534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-07-04 18:14:40.381564 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-04 18:14:40.381574 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-04 18:14:40.381582 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:14:40.381599 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-07-04 18:14:40.381615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-04 18:14:40.381719 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-04 18:14:40.381734 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:14:40.381748 | orchestrator | 2025-07-04 18:14:40.381762 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-07-04 18:14:40.381777 | orchestrator | Friday 04 July 2025 18:08:57 +0000 (0:00:01.020) 0:00:53.461 *********** 2025-07-04 18:14:40.381792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-07-04 18:14:40.381827 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-04 18:14:40.381836 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-04 18:14:40.381844 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:14:40.381853 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-07-04 18:14:40.381870 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-04 18:14:40.381879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-07-04 18:14:40.381887 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-04 18:14:40.381901 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:14:40.381909 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-04 18:14:40.381921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-04 18:14:40.381929 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:14:40.381937 | orchestrator | 2025-07-04 18:14:40.381945 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-07-04 18:14:40.381953 | orchestrator | Friday 04 July 2025 18:08:58 +0000 (0:00:00.918) 0:00:54.380 *********** 2025-07-04 18:14:40.381961 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-07-04 18:14:40.381978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-04 18:14:40.381986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-04 18:14:40.382124 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:14:40.382134 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-07-04 18:14:40.382150 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-04 18:14:40.382163 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-04 18:14:40.382172 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:14:40.382180 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-07-04 18:14:40.382195 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-04 18:14:40.382203 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-04 18:14:40.382211 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:14:40.382219 | orchestrator | 2025-07-04 18:14:40.382227 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2025-07-04 18:14:40.382235 | orchestrator | Friday 04 July 2025 18:09:00 +0000 
(0:00:01.631) 0:00:56.011 *********** 2025-07-04 18:14:40.382243 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-07-04 18:14:40.382257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-04 18:14:40.382266 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-04 18:14:40.382273 | 
orchestrator | skipping: [testbed-node-0] 2025-07-04 18:14:40.382286 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-07-04 18:14:40.382294 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-04 18:14:40.382309 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-04 18:14:40.382317 | 
orchestrator | skipping: [testbed-node-1] 2025-07-04 18:14:40.382325 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-07-04 18:14:40.382339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-04 18:14:40.382347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-04 18:14:40.382413 | 
orchestrator | skipping: [testbed-node-2] 2025-07-04 18:14:40.382425 | orchestrator | 2025-07-04 18:14:40.382433 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2025-07-04 18:14:40.382441 | orchestrator | Friday 04 July 2025 18:09:01 +0000 (0:00:00.927) 0:00:56.939 *********** 2025-07-04 18:14:40.382455 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-07-04 18:14:40.382464 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-04 18:14:40.382479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-04 18:14:40.382488 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:14:40.382496 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-07-04 18:14:40.382545 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-04 18:14:40.382555 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-04 18:14:40.382563 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:14:40.382576 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-07-04 18:14:40.382584 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-04 18:14:40.382591 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-04 18:14:40.382597 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:14:40.382604 | orchestrator | 2025-07-04 18:14:40.382611 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2025-07-04 18:14:40.382622 | orchestrator | Friday 04 July 2025 18:09:03 +0000 (0:00:02.325) 0:00:59.264 *********** 2025-07-04 18:14:40.382630 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-07-04 18:14:40.382642 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  
2025-07-04 18:14:40.382649 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-04 18:14:40.382656 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:14:40.382663 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-07-04 18:14:40.382673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  
2025-07-04 18:14:40.382681 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-04 18:14:40.382688 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:14:40.382699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-07-04 18:14:40.382711 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  
2025-07-04 18:14:40.382718 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-04 18:14:40.382725 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:14:40.382732 | orchestrator | 2025-07-04 18:14:40.382739 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2025-07-04 18:14:40.382855 | orchestrator | Friday 04 July 2025 18:09:05 +0000 (0:00:01.924) 0:01:01.189 *********** 2025-07-04 18:14:40.382868 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-07-04 18:14:40.382879 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-07-04 18:14:40.382891 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-07-04 18:14:40.382901 | orchestrator | 2025-07-04 18:14:40.382911 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2025-07-04 18:14:40.382921 | orchestrator | Friday 04 July 2025 18:09:06 +0000 (0:00:01.591) 0:01:02.780 *********** 2025-07-04 18:14:40.382934 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-07-04 18:14:40.382946 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-07-04 18:14:40.382958 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-07-04 18:14:40.382970 | orchestrator | 2025-07-04 18:14:40.382981 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2025-07-04 18:14:40.382992 | orchestrator | Friday 04 July 2025 18:09:08 +0000 (0:00:01.605) 0:01:04.386 *********** 2025-07-04 18:14:40.383016 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-07-04 18:14:40.383027 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-07-04 18:14:40.383038 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-07-04 18:14:40.383048 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:14:40.383060 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-07-04 18:14:40.383071 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-07-04 18:14:40.383083 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:14:40.383095 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-07-04 18:14:40.383116 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:14:40.383124 | orchestrator | 2025-07-04 18:14:40.383131 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2025-07-04 18:14:40.383138 | orchestrator | Friday 04 July 2025 18:09:09 +0000 (0:00:01.262) 0:01:05.649 *********** 2025-07-04 18:14:40.383153 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-07-04 18:14:40.383161 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-07-04 18:14:40.383169 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-07-04 18:14:40.383176 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-04 18:14:40.383187 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-04 18:14:40.383195 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-04 18:14:40.383207 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 
'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-07-04 18:14:40.383220 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-07-04 18:14:40.383227 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-07-04 18:14:40.383234 | orchestrator | 2025-07-04 18:14:40.383240 | orchestrator | TASK [include_role : aodh] ***************************************************** 2025-07-04 18:14:40.383247 | orchestrator | Friday 04 July 2025 18:09:13 +0000 (0:00:04.085) 0:01:09.734 *********** 2025-07-04 18:14:40.383254 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-04 18:14:40.383261 | orchestrator | 2025-07-04 
18:14:40.383267 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2025-07-04 18:14:40.383274 | orchestrator | Friday 04 July 2025 18:09:15 +0000 (0:00:01.200) 0:01:10.934 *********** 2025-07-04 18:14:40.383283 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-07-04 18:14:40.383290 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-07-04 18:14:40.383303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.383332 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.383345 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-07-04 18:14:40.383353 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-07-04 18:14:40.383378 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.383386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.383402 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-07-04 18:14:40.383410 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-07-04 18:14:40.383440 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-07-04 
18:14:40.383447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.383454 | orchestrator | 2025-07-04 18:14:40.383461 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2025-07-04 18:14:40.383468 | orchestrator | Friday 04 July 2025 18:09:23 +0000 (0:00:08.433) 0:01:19.367 *********** 2025-07-04 18:14:40.383475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-07-04 18:14:40.383520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-07-04 18:14:40.383536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.383544 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.383551 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:14:40.383564 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-07-04 18:14:40.383571 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-07-04 18:14:40.383579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.383586 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.383598 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:14:40.383608 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-07-04 18:14:40.383616 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-07-04 18:14:40.383627 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.383634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.383641 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:14:40.383648 | orchestrator | 2025-07-04 18:14:40.383655 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2025-07-04 18:14:40.383662 | orchestrator | Friday 04 July 2025 18:09:25 +0000 (0:00:01.830) 0:01:21.198 *********** 2025-07-04 18:14:40.383669 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 
'listen_port': '8042'}})  2025-07-04 18:14:40.383677 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-07-04 18:14:40.383684 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:14:40.383696 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-07-04 18:14:40.383703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-07-04 18:14:40.383710 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:14:40.383716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-07-04 18:14:40.383723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-07-04 18:14:40.383730 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:14:40.383737 | orchestrator | 2025-07-04 18:14:40.383744 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2025-07-04 18:14:40.383751 | orchestrator | Friday 04 July 2025 18:09:27 +0000 (0:00:01.916) 0:01:23.115 *********** 2025-07-04 18:14:40.383761 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:14:40.383768 | orchestrator | changed: [testbed-node-1] 2025-07-04 18:14:40.383775 | orchestrator | changed: [testbed-node-2] 2025-07-04 18:14:40.383781 | orchestrator | 2025-07-04 18:14:40.383788 | orchestrator | 
TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2025-07-04 18:14:40.383794 | orchestrator | Friday 04 July 2025 18:09:29 +0000 (0:00:02.084) 0:01:25.199 *********** 2025-07-04 18:14:40.383801 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:14:40.383808 | orchestrator | changed: [testbed-node-1] 2025-07-04 18:14:40.383814 | orchestrator | changed: [testbed-node-2] 2025-07-04 18:14:40.383821 | orchestrator | 2025-07-04 18:14:40.383827 | orchestrator | TASK [include_role : barbican] ************************************************* 2025-07-04 18:14:40.383834 | orchestrator | Friday 04 July 2025 18:09:31 +0000 (0:00:02.221) 0:01:27.421 *********** 2025-07-04 18:14:40.383841 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-04 18:14:40.383847 | orchestrator | 2025-07-04 18:14:40.383854 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2025-07-04 18:14:40.383860 | orchestrator | Friday 04 July 2025 18:09:32 +0000 (0:00:01.228) 0:01:28.650 *********** 2025-07-04 18:14:40.383876 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-04 18:14:40.383885 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.383897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.383905 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-04 18:14:40.383918 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.383926 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.383938 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-04 18:14:40.383946 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.383958 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.383965 | orchestrator | 2025-07-04 18:14:40.383972 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2025-07-04 18:14:40.383979 | orchestrator | Friday 04 July 2025 18:09:40 +0000 (0:00:07.473) 0:01:36.124 *********** 2025-07-04 18:14:40.383989 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-07-04 18:14:40.383997 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.384032 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.384040 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-07-04 18:14:40.384053 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:14:40.384060 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 
'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.384067 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.384074 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:14:40.384085 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 
'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-07-04 18:14:40.384096 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.384103 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.384117 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:14:40.384124 | orchestrator | 2025-07-04 18:14:40.384131 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2025-07-04 18:14:40.384137 | orchestrator | Friday 04 July 2025 18:09:41 +0000 (0:00:01.122) 0:01:37.247 *********** 2025-07-04 18:14:40.384144 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-07-04 18:14:40.384151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-07-04 18:14:40.384158 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:14:40.384164 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-07-04 18:14:40.384171 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-07-04 18:14:40.384178 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:14:40.384184 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-07-04 18:14:40.384191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-07-04 18:14:40.384198 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:14:40.384205 | orchestrator | 2025-07-04 18:14:40.384212 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2025-07-04 18:14:40.384218 | orchestrator | Friday 04 July 2025 18:09:43 +0000 (0:00:02.513) 0:01:39.760 *********** 2025-07-04 18:14:40.384225 | orchestrator | changed: 
[testbed-node-1] 2025-07-04 18:14:40.384232 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:14:40.384238 | orchestrator | changed: [testbed-node-2] 2025-07-04 18:14:40.384245 | orchestrator | 2025-07-04 18:14:40.384252 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2025-07-04 18:14:40.384262 | orchestrator | Friday 04 July 2025 18:09:45 +0000 (0:00:01.889) 0:01:41.650 *********** 2025-07-04 18:14:40.384269 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:14:40.384276 | orchestrator | changed: [testbed-node-1] 2025-07-04 18:14:40.384282 | orchestrator | changed: [testbed-node-2] 2025-07-04 18:14:40.384289 | orchestrator | 2025-07-04 18:14:40.384296 | orchestrator | TASK [include_role : blazar] *************************************************** 2025-07-04 18:14:40.384302 | orchestrator | Friday 04 July 2025 18:09:47 +0000 (0:00:02.039) 0:01:43.690 *********** 2025-07-04 18:14:40.384309 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:14:40.384315 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:14:40.384322 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:14:40.384329 | orchestrator | 2025-07-04 18:14:40.384335 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2025-07-04 18:14:40.384342 | orchestrator | Friday 04 July 2025 18:09:48 +0000 (0:00:00.308) 0:01:43.999 *********** 2025-07-04 18:14:40.384348 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-04 18:14:40.384380 | orchestrator | 2025-07-04 18:14:40.384388 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2025-07-04 18:14:40.384395 | orchestrator | Friday 04 July 2025 18:09:48 +0000 (0:00:00.653) 0:01:44.652 *********** 2025-07-04 18:14:40.384416 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': 
{'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-07-04 18:14:40.384532 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-07-04 18:14:40.384541 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server 
testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-07-04 18:14:40.384548 | orchestrator | 2025-07-04 18:14:40.384554 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2025-07-04 18:14:40.384561 | orchestrator | Friday 04 July 2025 18:09:51 +0000 (0:00:02.725) 0:01:47.377 *********** 2025-07-04 18:14:40.384573 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-07-04 18:14:40.384580 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:14:40.384587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check 
inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-07-04 18:14:40.384601 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:14:40.384613 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-07-04 18:14:40.384621 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:14:40.384627 | orchestrator | 2025-07-04 18:14:40.384634 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2025-07-04 18:14:40.384640 | orchestrator | Friday 04 July 2025 18:09:52 +0000 (0:00:01.299) 0:01:48.676 *********** 2025-07-04 18:14:40.384649 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-07-04 18:14:40.384658 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-07-04 18:14:40.384666 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:14:40.384673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-07-04 18:14:40.384680 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-07-04 18:14:40.384687 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:14:40.384698 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-07-04 18:14:40.384710 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-07-04 18:14:40.384717 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:14:40.384724 | orchestrator | 2025-07-04 18:14:40.384730 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2025-07-04 18:14:40.384737 | orchestrator | Friday 04 July 2025 18:09:54 +0000 (0:00:01.494) 0:01:50.171 *********** 2025-07-04 18:14:40.384743 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:14:40.384750 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:14:40.384756 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:14:40.384763 | orchestrator | 2025-07-04 18:14:40.384769 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2025-07-04 18:14:40.384776 | orchestrator | Friday 04 July 2025 18:09:54 +0000 (0:00:00.698) 0:01:50.869 *********** 2025-07-04 18:14:40.384783 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:14:40.384789 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:14:40.384796 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:14:40.384803 | orchestrator | 2025-07-04 18:14:40.384809 | orchestrator | TASK [include_role : cinder] *************************************************** 2025-07-04 
18:14:40.384821 | orchestrator | Friday 04 July 2025 18:09:55 +0000 (0:00:01.018) 0:01:51.888 *********** 2025-07-04 18:14:40.384828 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-04 18:14:40.384835 | orchestrator | 2025-07-04 18:14:40.384842 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2025-07-04 18:14:40.384848 | orchestrator | Friday 04 July 2025 18:09:56 +0000 (0:00:00.994) 0:01:52.882 *********** 2025-07-04 18:14:40.384855 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-04 18:14:40.384863 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.384871 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.384887 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.384900 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 
'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-04 18:14:40.384907 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.384914 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.384921 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.384953 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-04 18:14:40.384962 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.384973 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.384980 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.384987 | orchestrator | 2025-07-04 18:14:40.384997 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2025-07-04 18:14:40.385008 | orchestrator | Friday 04 July 2025 18:10:00 +0000 (0:00:03.848) 0:01:56.731 *********** 2025-07-04 18:14:40.385020 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-07-04 18:14:40.385049 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.385064 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.385082 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.385093 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:14:40.385104 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': 
{'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-07-04 18:14:40.385123 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.385140 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.385152 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-07-04 18:14:40.385167 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.385174 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:14:40.385181 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.385189 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.385204 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.385211 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:14:40.385218 | orchestrator | 2025-07-04 18:14:40.385225 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2025-07-04 18:14:40.385232 | orchestrator | Friday 04 July 2025 18:10:02 +0000 (0:00:01.246) 0:01:57.978 *********** 2025-07-04 18:14:40.385239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-07-04 18:14:40.385246 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-07-04 18:14:40.385254 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:14:40.385261 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-07-04 18:14:40.385267 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-07-04 18:14:40.385274 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:14:40.385285 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-07-04 18:14:40.385293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-07-04 18:14:40.385300 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:14:40.385307 | orchestrator | 2025-07-04 18:14:40.385313 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2025-07-04 18:14:40.385320 | orchestrator | Friday 04 July 2025 18:10:03 +0000 (0:00:01.182) 0:01:59.160 *********** 2025-07-04 18:14:40.385326 | orchestrator | changed: [testbed-node-1] 2025-07-04 18:14:40.385333 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:14:40.385340 | orchestrator | changed: [testbed-node-2] 2025-07-04 18:14:40.385347 | orchestrator | 2025-07-04 18:14:40.385353 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2025-07-04 18:14:40.385417 | orchestrator | Friday 04 July 2025 18:10:04 +0000 (0:00:01.229) 0:02:00.389 *********** 2025-07-04 18:14:40.385425 | orchestrator | changed: [testbed-node-2] 2025-07-04 18:14:40.385432 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:14:40.385438 | orchestrator | changed: [testbed-node-1] 2025-07-04 18:14:40.385445 | orchestrator | 2025-07-04 18:14:40.385451 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2025-07-04 18:14:40.385458 | orchestrator | Friday 04 July 2025 18:10:07 +0000 (0:00:02.518) 0:02:02.907 *********** 2025-07-04 18:14:40.385465 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:14:40.385567 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:14:40.385575 | 
orchestrator | skipping: [testbed-node-2] 2025-07-04 18:14:40.385581 | orchestrator | 2025-07-04 18:14:40.385588 | orchestrator | TASK [include_role : cyborg] *************************************************** 2025-07-04 18:14:40.385594 | orchestrator | Friday 04 July 2025 18:10:07 +0000 (0:00:00.627) 0:02:03.535 *********** 2025-07-04 18:14:40.385601 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:14:40.385608 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:14:40.385614 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:14:40.385621 | orchestrator | 2025-07-04 18:14:40.385627 | orchestrator | TASK [include_role : designate] ************************************************ 2025-07-04 18:14:40.385634 | orchestrator | Friday 04 July 2025 18:10:08 +0000 (0:00:00.391) 0:02:03.926 *********** 2025-07-04 18:14:40.385641 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-04 18:14:40.385647 | orchestrator | 2025-07-04 18:14:40.385654 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2025-07-04 18:14:40.385661 | orchestrator | Friday 04 July 2025 18:10:08 +0000 (0:00:00.835) 0:02:04.762 *********** 2025-07-04 18:14:40.385669 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 
'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-04 18:14:40.385681 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-04 18:14:40.385695 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-04 18:14:40.385707 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.385714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-04 18:14:40.385721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.385728 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.385738 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.385745 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.385765 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.385776 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.385794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.385808 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250530', 
'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.385822 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.385832 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-04 18:14:40.385856 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-04 18:14:40.385866 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.385876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.385886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.385896 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.385911 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.385921 | orchestrator | 2025-07-04 18:14:40.385931 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single 
external frontend] *** 2025-07-04 18:14:40.385953 | orchestrator | Friday 04 July 2025 18:10:13 +0000 (0:00:04.306) 0:02:09.069 *********** 2025-07-04 18:14:40.385967 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-04 18:14:40.385974 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-04 18:14:40.385980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.385987 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.385997 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.386004 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 
'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.386055 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.386068 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:14:40.386090 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-04 
18:14:40.386100 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-04 18:14:40.386111 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.386123 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.386135 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.386174 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.386189 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.386196 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:14:40.386202 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-04 18:14:40.386248 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-04 18:14:40.386258 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.386268 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.386280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.386298 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.386305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.386312 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:14:40.386318 | orchestrator | 2025-07-04 18:14:40.386324 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2025-07-04 18:14:40.386331 | orchestrator | Friday 04 July 2025 18:10:14 +0000 (0:00:01.094) 0:02:10.163 *********** 2025-07-04 18:14:40.386410 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-07-04 18:14:40.386419 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-07-04 18:14:40.386426 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:14:40.386432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-07-04 18:14:40.386439 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-07-04 18:14:40.386445 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:14:40.386451 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-07-04 18:14:40.386458 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-07-04 18:14:40.386469 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:14:40.386475 | orchestrator | 2025-07-04 18:14:40.386481 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2025-07-04 18:14:40.386488 | orchestrator | Friday 04 July 2025 18:10:15 +0000 (0:00:01.076) 0:02:11.239 *********** 2025-07-04 18:14:40.386494 | orchestrator | changed: [testbed-node-1] 2025-07-04 18:14:40.386500 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:14:40.386506 | orchestrator | changed: [testbed-node-2] 2025-07-04 18:14:40.386513 | orchestrator | 2025-07-04 18:14:40.386522 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2025-07-04 18:14:40.386529 | orchestrator | Friday 04 July 2025 18:10:16 +0000 (0:00:01.625) 0:02:12.865 *********** 2025-07-04 18:14:40.386535 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:14:40.386541 | orchestrator | changed: [testbed-node-1] 2025-07-04 18:14:40.386548 | orchestrator | changed: [testbed-node-2] 2025-07-04 18:14:40.386554 | orchestrator | 2025-07-04 18:14:40.386560 | orchestrator | TASK [include_role : etcd] ***************************************************** 2025-07-04 18:14:40.386566 | 
orchestrator | Friday 04 July 2025 18:10:18 +0000 (0:00:01.941) 0:02:14.806 *********** 2025-07-04 18:14:40.386572 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:14:40.386578 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:14:40.386584 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:14:40.386590 | orchestrator | 2025-07-04 18:14:40.386597 | orchestrator | TASK [include_role : glance] *************************************************** 2025-07-04 18:14:40.386604 | orchestrator | Friday 04 July 2025 18:10:19 +0000 (0:00:00.299) 0:02:15.105 *********** 2025-07-04 18:14:40.386610 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-04 18:14:40.386616 | orchestrator | 2025-07-04 18:14:40.386622 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2025-07-04 18:14:40.386628 | orchestrator | Friday 04 July 2025 18:10:19 +0000 (0:00:00.788) 0:02:15.894 *********** 2025-07-04 18:14:40.386643 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout 
server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-04 18:14:40.386655 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 
2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-04 18:14:40.386673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250530', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': 
['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-07-04 18:14:40.386684 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250530', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server 
testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-07-04 18:14:40.386701 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-04 18:14:40.386709 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250530', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 
2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-07-04 18:14:40.386720 | orchestrator | 2025-07-04 18:14:40.386727 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2025-07-04 18:14:40.386733 | orchestrator | Friday 04 July 2025 18:10:24 +0000 (0:00:04.737) 0:02:20.631 *********** 2025-07-04 18:14:40.386747 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-07-04 18:14:40.386756 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250530', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 
2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-07-04 18:14:40.386770 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:14:40.386780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-07-04 18:14:40.386792 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250530', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-07-04 18:14:40.386803 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:14:40.386814 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-07-04 18:14:40.386827 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250530', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-07-04 18:14:40.386839 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:14:40.386845 | orchestrator | 2025-07-04 18:14:40.386852 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2025-07-04 18:14:40.386858 | orchestrator | 
Friday 04 July 2025 18:10:28 +0000 (0:00:03.875) 0:02:24.507 *********** 2025-07-04 18:14:40.386864 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-07-04 18:14:40.386871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-07-04 18:14:40.386878 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:14:40.386888 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-07-04 18:14:40.386895 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-07-04 18:14:40.386901 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:14:40.386908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-07-04 18:14:40.386919 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-07-04 18:14:40.386925 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:14:40.386932 | orchestrator | 2025-07-04 18:14:40.386938 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2025-07-04 18:14:40.386944 | orchestrator | Friday 04 July 2025 18:10:32 +0000 (0:00:03.536) 0:02:28.043 *********** 2025-07-04 18:14:40.386954 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:14:40.386961 | orchestrator 
| changed: [testbed-node-1] 2025-07-04 18:14:40.386967 | orchestrator | changed: [testbed-node-2] 2025-07-04 18:14:40.386973 | orchestrator | 2025-07-04 18:14:40.386979 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2025-07-04 18:14:40.386985 | orchestrator | Friday 04 July 2025 18:10:33 +0000 (0:00:01.580) 0:02:29.623 *********** 2025-07-04 18:14:40.386991 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:14:40.386997 | orchestrator | changed: [testbed-node-1] 2025-07-04 18:14:40.387003 | orchestrator | changed: [testbed-node-2] 2025-07-04 18:14:40.387009 | orchestrator | 2025-07-04 18:14:40.387015 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2025-07-04 18:14:40.387021 | orchestrator | Friday 04 July 2025 18:10:35 +0000 (0:00:01.945) 0:02:31.569 *********** 2025-07-04 18:14:40.387028 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:14:40.387034 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:14:40.387040 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:14:40.387046 | orchestrator | 2025-07-04 18:14:40.387052 | orchestrator | TASK [include_role : grafana] ************************************************** 2025-07-04 18:14:40.387058 | orchestrator | Friday 04 July 2025 18:10:35 +0000 (0:00:00.271) 0:02:31.840 *********** 2025-07-04 18:14:40.387064 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-04 18:14:40.387070 | orchestrator | 2025-07-04 18:14:40.387076 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2025-07-04 18:14:40.387082 | orchestrator | Friday 04 July 2025 18:10:36 +0000 (0:00:00.806) 0:02:32.647 *********** 2025-07-04 18:14:40.387088 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-04 18:14:40.387099 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-04 18:14:40.387106 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-04 18:14:40.387112 | 
orchestrator | 2025-07-04 18:14:40.387118 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2025-07-04 18:14:40.387125 | orchestrator | Friday 04 July 2025 18:10:39 +0000 (0:00:03.193) 0:02:35.840 *********** 2025-07-04 18:14:40.387141 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-07-04 18:14:40.387147 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:14:40.387154 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-07-04 18:14:40.387160 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:14:40.387167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 
'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-07-04 18:14:40.387174 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:14:40.387180 | orchestrator | 2025-07-04 18:14:40.387186 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2025-07-04 18:14:40.387192 | orchestrator | Friday 04 July 2025 18:10:40 +0000 (0:00:00.375) 0:02:36.215 *********** 2025-07-04 18:14:40.387255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-07-04 18:14:40.387264 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-07-04 18:14:40.387271 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:14:40.387281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-07-04 18:14:40.387287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-07-04 18:14:40.387294 | orchestrator | skipping: 
[testbed-node-1] 2025-07-04 18:14:40.387300 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-07-04 18:14:40.387306 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-07-04 18:14:40.387318 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:14:40.387324 | orchestrator | 2025-07-04 18:14:40.387330 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2025-07-04 18:14:40.387336 | orchestrator | Friday 04 July 2025 18:10:40 +0000 (0:00:00.611) 0:02:36.826 *********** 2025-07-04 18:14:40.387342 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:14:40.387349 | orchestrator | changed: [testbed-node-2] 2025-07-04 18:14:40.387368 | orchestrator | changed: [testbed-node-1] 2025-07-04 18:14:40.387375 | orchestrator | 2025-07-04 18:14:40.387381 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2025-07-04 18:14:40.387388 | orchestrator | Friday 04 July 2025 18:10:42 +0000 (0:00:01.474) 0:02:38.301 *********** 2025-07-04 18:14:40.387394 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:14:40.387400 | orchestrator | changed: [testbed-node-1] 2025-07-04 18:14:40.387406 | orchestrator | changed: [testbed-node-2] 2025-07-04 18:14:40.387412 | orchestrator | 2025-07-04 18:14:40.387423 | orchestrator | TASK [include_role : heat] ***************************************************** 2025-07-04 18:14:40.387430 | orchestrator | Friday 04 July 2025 18:10:44 +0000 (0:00:01.927) 0:02:40.229 *********** 2025-07-04 18:14:40.387436 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:14:40.387442 | orchestrator | skipping: [testbed-node-1] 2025-07-04 
18:14:40.387448 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:14:40.387454 | orchestrator | 2025-07-04 18:14:40.387460 | orchestrator | TASK [include_role : horizon] ************************************************** 2025-07-04 18:14:40.387466 | orchestrator | Friday 04 July 2025 18:10:44 +0000 (0:00:00.355) 0:02:40.584 *********** 2025-07-04 18:14:40.387473 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-04 18:14:40.387479 | orchestrator | 2025-07-04 18:14:40.387485 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2025-07-04 18:14:40.387491 | orchestrator | Friday 04 July 2025 18:10:45 +0000 (0:00:00.979) 0:02:41.564 *********** 2025-07-04 18:14:40.387502 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': 
['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-07-04 18:14:40.387520 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-07-04 18:14:40.387532 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-07-04 18:14:40.387543 | orchestrator | 2025-07-04 18:14:40.387549 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2025-07-04 18:14:40.387556 | orchestrator | Friday 04 July 2025 18:10:50 +0000 (0:00:04.371) 0:02:45.935 *********** 2025-07-04 18:14:40.387567 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 
'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-07-04 18:14:40.387574 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:14:40.387587 
| orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-07-04 18:14:40.387599 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:14:40.387610 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if 
{ path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-07-04 18:14:40.387618 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:14:40.387624 | orchestrator | 2025-07-04 18:14:40.387630 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2025-07-04 18:14:40.387636 | orchestrator | Friday 04 July 2025 18:10:50 +0000 (0:00:00.625) 0:02:46.560 *********** 2025-07-04 18:14:40.387644 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-07-04 18:14:40.387656 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-07-04 18:14:40.387666 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-07-04 18:14:40.387674 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-07-04 18:14:40.387680 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-07-04 18:14:40.387687 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:14:40.387694 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-07-04 18:14:40.387700 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-07-04 18:14:40.387711 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-07-04 18:14:40.387718 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-07-04 18:14:40.387724 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-07-04 18:14:40.387731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-07-04 18:14:40.387737 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-07-04 18:14:40.387744 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-07-04 18:14:40.387754 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-07-04 18:14:40.387760 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:14:40.387766 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  
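The firewall tasks above loop over each service's `haproxy` dict and skip every item on every node. A minimal sketch of that pattern (an assumption about the role's structure, not the actual kolla-ansible source — the variable name `enable_external_api_firewalld` is hypothetical) shows how an Ansible `dict2items`-style loop yields the `(item={'key': …, 'value': …})` entries seen in the log, and why each one is skipped when the firewall feature is off:

```python
# Hedged sketch: mimic Ansible's dict2items filter and a skipped loop task.

def dict2items(d):
    # Ansible's dict2items: {'k': v} -> [{'key': 'k', 'value': v}]
    return [{"key": k, "value": v} for k, v in d.items()]

# Trimmed-down version of the horizon haproxy dict from the log above.
horizon_haproxy = {
    "horizon": {"enabled": True, "external": False, "port": "443"},
    "horizon_external": {"enabled": True, "external": True, "port": "443"},
}

# Assumed feature flag; when false, every loop item is reported as skipped.
enable_external_api_firewalld = False

results = []
for item in dict2items(horizon_haproxy):
    if not enable_external_api_firewalld:
        results.append((item["key"], "skipped"))
    else:
        results.append((item["key"], "configured"))
```

With the flag false, `results` lists every listener as skipped, matching the per-item `skipping:` lines in the task output.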
2025-07-04 18:14:40.387772 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:14:40.387778 | orchestrator | 2025-07-04 18:14:40.387785 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2025-07-04 18:14:40.387791 | orchestrator | Friday 04 July 2025 18:10:51 +0000 (0:00:00.895) 0:02:47.456 *********** 2025-07-04 18:14:40.387797 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:14:40.387803 | orchestrator | changed: [testbed-node-1] 2025-07-04 18:14:40.387809 | orchestrator | changed: [testbed-node-2] 2025-07-04 18:14:40.387815 | orchestrator | 2025-07-04 18:14:40.387821 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2025-07-04 18:14:40.387830 | orchestrator | Friday 04 July 2025 18:10:53 +0000 (0:00:01.477) 0:02:48.933 *********** 2025-07-04 18:14:40.387836 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:14:40.387843 | orchestrator | changed: [testbed-node-1] 2025-07-04 18:14:40.387849 | orchestrator | changed: [testbed-node-2] 2025-07-04 18:14:40.387855 | orchestrator | 2025-07-04 18:14:40.387861 | orchestrator | TASK [include_role : influxdb] ************************************************* 2025-07-04 18:14:40.387867 | orchestrator | Friday 04 July 2025 18:10:54 +0000 (0:00:01.847) 0:02:50.781 *********** 2025-07-04 18:14:40.387873 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:14:40.387880 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:14:40.387886 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:14:40.387892 | orchestrator | 2025-07-04 18:14:40.387898 | orchestrator | TASK [include_role : ironic] *************************************************** 2025-07-04 18:14:40.387904 | orchestrator | Friday 04 July 2025 18:10:55 +0000 (0:00:00.303) 0:02:51.084 *********** 2025-07-04 18:14:40.387910 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:14:40.387916 | orchestrator | skipping: [testbed-node-1] 
2025-07-04 18:14:40.387922 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:14:40.387928 | orchestrator | 2025-07-04 18:14:40.387934 | orchestrator | TASK [include_role : keystone] ************************************************* 2025-07-04 18:14:40.387940 | orchestrator | Friday 04 July 2025 18:10:55 +0000 (0:00:00.391) 0:02:51.476 *********** 2025-07-04 18:14:40.387946 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-04 18:14:40.387952 | orchestrator | 2025-07-04 18:14:40.387958 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2025-07-04 18:14:40.387964 | orchestrator | Friday 04 July 2025 18:10:56 +0000 (0:00:01.177) 0:02:52.653 *********** 2025-07-04 18:14:40.387976 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-04 18:14:40.387991 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-04 18:14:40.387998 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-07-04 18:14:40.388008 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 
'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-04 18:14:40.388016 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-04 18:14:40.388027 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-07-04 18:14:40.388034 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-04 18:14:40.388045 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-04 18:14:40.388051 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-07-04 18:14:40.388058 | 
orchestrator | 2025-07-04 18:14:40.388064 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2025-07-04 18:14:40.388074 | orchestrator | Friday 04 July 2025 18:11:01 +0000 (0:00:04.527) 0:02:57.181 *********** 2025-07-04 18:14:40.388081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-07-04 18:14:40.388094 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  
2025-07-04 18:14:40.388101 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-07-04 18:14:40.388111 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:14:40.388118 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-07-04 18:14:40.388125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-04 18:14:40.388134 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-07-04 18:14:40.388141 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:14:40.388163 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': 
'5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-07-04 18:14:40.388175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-04 18:14:40.388181 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-07-04 18:14:40.388188 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:14:40.388194 | orchestrator | 2025-07-04 18:14:40.388200 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2025-07-04 18:14:40.388206 | orchestrator | Friday 04 July 2025 18:11:01 +0000 (0:00:00.580) 0:02:57.762 *********** 2025-07-04 18:14:40.388213 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-07-04 18:14:40.388220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-07-04 18:14:40.388226 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:14:40.388232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-07-04 18:14:40.388242 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-07-04 18:14:40.388249 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:14:40.388256 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-07-04 18:14:40.388262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-07-04 18:14:40.388269 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:14:40.388275 
| orchestrator | 2025-07-04 18:14:40.388282 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2025-07-04 18:14:40.388288 | orchestrator | Friday 04 July 2025 18:11:02 +0000 (0:00:01.040) 0:02:58.802 *********** 2025-07-04 18:14:40.388294 | orchestrator | changed: [testbed-node-1] 2025-07-04 18:14:40.388300 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:14:40.388306 | orchestrator | changed: [testbed-node-2] 2025-07-04 18:14:40.388316 | orchestrator | 2025-07-04 18:14:40.388322 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2025-07-04 18:14:40.388329 | orchestrator | Friday 04 July 2025 18:11:04 +0000 (0:00:01.351) 0:03:00.154 *********** 2025-07-04 18:14:40.388335 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:14:40.388341 | orchestrator | changed: [testbed-node-1] 2025-07-04 18:14:40.388347 | orchestrator | changed: [testbed-node-2] 2025-07-04 18:14:40.388353 | orchestrator | 2025-07-04 18:14:40.388372 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2025-07-04 18:14:40.388378 | orchestrator | Friday 04 July 2025 18:11:06 +0000 (0:00:02.058) 0:03:02.212 *********** 2025-07-04 18:14:40.388395 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:14:40.388402 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:14:40.388408 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:14:40.388414 | orchestrator | 2025-07-04 18:14:40.388420 | orchestrator | TASK [include_role : magnum] *************************************************** 2025-07-04 18:14:40.388427 | orchestrator | Friday 04 July 2025 18:11:06 +0000 (0:00:00.325) 0:03:02.537 *********** 2025-07-04 18:14:40.388433 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-04 18:14:40.388439 | orchestrator | 2025-07-04 18:14:40.388445 | orchestrator | TASK [haproxy-config : Copying over magnum 
haproxy config] ********************* 2025-07-04 18:14:40.388451 | orchestrator | Friday 04 July 2025 18:11:07 +0000 (0:00:01.286) 0:03:03.824 *********** 2025-07-04 18:14:40.388458 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-04 18:14:40.388465 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.388477 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': 
{'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-04 18:14:40.388488 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.388506 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-04 18:14:40.388528 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.388534 | orchestrator | 2025-07-04 18:14:40.388541 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2025-07-04 18:14:40.388547 | orchestrator | Friday 04 July 2025 18:11:11 +0000 (0:00:03.578) 0:03:07.402 *********** 2025-07-04 18:14:40.388554 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': 
{'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-04 18:14:40.388565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.388576 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:14:40.388594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-04 18:14:40.388602 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.388608 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:14:40.388615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-04 18:14:40.388621 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.388632 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:14:40.388638 | orchestrator | 2025-07-04 18:14:40.388647 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2025-07-04 18:14:40.388654 | orchestrator | Friday 04 July 2025 18:11:12 +0000 (0:00:00.589) 0:03:07.992 *********** 2025-07-04 18:14:40.388660 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-07-04 18:14:40.388667 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-07-04 18:14:40.388673 | orchestrator | skipping: 
[testbed-node-0] 2025-07-04 18:14:40.388679 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-07-04 18:14:40.388686 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-07-04 18:14:40.388692 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:14:40.388698 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-07-04 18:14:40.388705 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-07-04 18:14:40.388722 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:14:40.388729 | orchestrator | 2025-07-04 18:14:40.388736 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2025-07-04 18:14:40.388742 | orchestrator | Friday 04 July 2025 18:11:13 +0000 (0:00:01.166) 0:03:09.158 *********** 2025-07-04 18:14:40.388748 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:14:40.388754 | orchestrator | changed: [testbed-node-1] 2025-07-04 18:14:40.388760 | orchestrator | changed: [testbed-node-2] 2025-07-04 18:14:40.388766 | orchestrator | 2025-07-04 18:14:40.388772 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2025-07-04 18:14:40.388779 | orchestrator | Friday 04 July 2025 18:11:14 +0000 (0:00:01.225) 0:03:10.384 *********** 2025-07-04 18:14:40.388785 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:14:40.388791 | orchestrator | changed: 
[testbed-node-1] 2025-07-04 18:14:40.388797 | orchestrator | changed: [testbed-node-2] 2025-07-04 18:14:40.388803 | orchestrator | 2025-07-04 18:14:40.388810 | orchestrator | TASK [include_role : manila] *************************************************** 2025-07-04 18:14:40.388816 | orchestrator | Friday 04 July 2025 18:11:16 +0000 (0:00:02.025) 0:03:12.409 *********** 2025-07-04 18:14:40.388822 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-04 18:14:40.388839 | orchestrator | 2025-07-04 18:14:40.388845 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2025-07-04 18:14:40.388851 | orchestrator | Friday 04 July 2025 18:11:17 +0000 (0:00:00.984) 0:03:13.393 *********** 2025-07-04 18:14:40.388858 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-07-04 18:14:40.388869 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250530', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.388875 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.388882 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-07-04 18:14:40.388902 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.388909 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.388924 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.388949 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.388959 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-07-04 18:14:40.388966 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.388984 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.388991 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.388998 | orchestrator | 2025-07-04 18:14:40.389004 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2025-07-04 18:14:40.389010 | orchestrator | Friday 04 July 2025 18:11:20 +0000 (0:00:03.389) 0:03:16.783 *********** 2025-07-04 18:14:40.389017 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/manila-api:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-07-04 18:14:40.389028 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.389038 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.389045 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.389052 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:14:40.389069 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-07-04 18:14:40.389076 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250530', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.389087 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.389093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.389100 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:14:40.389110 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/manila-api:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-07-04 18:14:40.389130 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.389137 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.389143 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.389154 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:14:40.389160 | orchestrator | 2025-07-04 18:14:40.389166 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2025-07-04 18:14:40.389173 | orchestrator | Friday 04 July 2025 18:11:21 +0000 (0:00:00.599) 0:03:17.382 *********** 2025-07-04 18:14:40.389179 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-07-04 18:14:40.389185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-07-04 18:14:40.389192 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:14:40.389198 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-07-04 18:14:40.389204 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-07-04 18:14:40.389211 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:14:40.389217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-07-04 18:14:40.389228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-07-04 18:14:40.389235 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:14:40.389241 | orchestrator | 2025-07-04 18:14:40.389257 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2025-07-04 18:14:40.389264 | orchestrator | Friday 04 July 2025 18:11:22 +0000 (0:00:00.772) 0:03:18.155 *********** 2025-07-04 18:14:40.389270 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:14:40.389276 | orchestrator | changed: [testbed-node-1] 2025-07-04 18:14:40.389282 | orchestrator | changed: [testbed-node-2] 2025-07-04 18:14:40.389288 | orchestrator | 2025-07-04 18:14:40.389294 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2025-07-04 18:14:40.389300 | orchestrator | Friday 04 July 2025 18:11:23 +0000 (0:00:01.464) 0:03:19.619 *********** 2025-07-04 18:14:40.389307 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:14:40.389313 | orchestrator | changed: [testbed-node-1] 2025-07-04 18:14:40.389319 | orchestrator | changed: [testbed-node-2] 2025-07-04 18:14:40.389325 | orchestrator | 2025-07-04 18:14:40.389331 | orchestrator | TASK [include_role : mariadb] ************************************************** 2025-07-04 18:14:40.389337 | orchestrator | Friday 04 July 2025 18:11:25 +0000 (0:00:02.110) 0:03:21.730 *********** 2025-07-04 18:14:40.389343 | 
orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-04 18:14:40.389350 | orchestrator | 2025-07-04 18:14:40.389417 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2025-07-04 18:14:40.389432 | orchestrator | Friday 04 July 2025 18:11:26 +0000 (0:00:01.074) 0:03:22.804 *********** 2025-07-04 18:14:40.389439 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-07-04 18:14:40.389446 | orchestrator | 2025-07-04 18:14:40.389452 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2025-07-04 18:14:40.389463 | orchestrator | Friday 04 July 2025 18:11:30 +0000 (0:00:03.208) 0:03:26.012 *********** 2025-07-04 18:14:40.389486 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-04 18:14:40.389495 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-07-04 18:14:40.389501 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:14:40.389517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 
'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-04 18:14:40.389529 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-07-04 18:14:40.389535 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:14:40.389542 | orchestrator 
| skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-04 18:14:40.389553 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 
'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-07-04 18:14:40.389560 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:14:40.389566 | orchestrator | 2025-07-04 18:14:40.389572 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2025-07-04 18:14:40.389578 | orchestrator | Friday 04 July 2025 18:11:33 +0000 (0:00:03.227) 0:03:29.240 *********** 2025-07-04 18:14:40.389596 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' 
server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-04 18:14:40.389607 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-07-04 18:14:40.389613 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:14:40.389622 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-04 18:14:40.389636 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  
2025-07-04 18:14:40.389642 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:14:40.389647 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-04 18:14:40.389659 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-07-04 18:14:40.389668 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:14:40.389681 | orchestrator | 2025-07-04 18:14:40.389695 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2025-07-04 18:14:40.389703 | orchestrator | Friday 04 July 2025 18:11:36 +0000 (0:00:03.623) 0:03:32.863 *********** 2025-07-04 18:14:40.389713 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-07-04 18:14:40.389749 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 
rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-07-04 18:14:40.389759 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:14:40.389768 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-07-04 18:14:40.389777 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-07-04 18:14:40.389785 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:14:40.389794 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 
3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-07-04 18:14:40.389804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-07-04 18:14:40.389814 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:14:40.389824 | orchestrator | 2025-07-04 18:14:40.389833 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2025-07-04 18:14:40.389846 | orchestrator | Friday 04 July 2025 18:11:40 +0000 (0:00:03.255) 0:03:36.119 *********** 2025-07-04 18:14:40.389856 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:14:40.389864 | orchestrator | changed: [testbed-node-1] 2025-07-04 18:14:40.389875 | orchestrator | changed: [testbed-node-2] 2025-07-04 18:14:40.389880 | orchestrator | 2025-07-04 18:14:40.389886 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2025-07-04 18:14:40.389891 | orchestrator | Friday 04 July 2025 18:11:42 +0000 (0:00:02.198) 0:03:38.318 *********** 2025-07-04 18:14:40.389896 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:14:40.389902 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:14:40.389907 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:14:40.389912 | orchestrator | 2025-07-04 18:14:40.389917 | 
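The `custom_member_list` entries recorded in the mariadb haproxy task above correspond, roughly, to an HAProxy `listen` section like the following. This is a sketch only: the member `server` lines are taken verbatim from the log, but the section layout and the `bind` address (192.168.16.254 as a hypothetical internal VIP) are assumptions about how kolla-ansible renders `haproxy.cfg`, not output from this job:

```
listen mariadb
    mode tcp
    option clitcpka
    timeout client 3600s
    option srvtcpka
    timeout server 3600s
    bind 192.168.16.254:3306
    server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5
    server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup
    server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup
```

The `backup` keyword on node-1 and node-2 makes this an active/passive pool: traffic goes to node-0 while its health check (`check port 3306 inter 2000 rise 2 fall 5`) passes, and fails over only when it is marked down.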
orchestrator | TASK [include_role : masakari] ************************************************* 2025-07-04 18:14:40.389923 | orchestrator | Friday 04 July 2025 18:11:43 +0000 (0:00:01.491) 0:03:39.809 *********** 2025-07-04 18:14:40.389928 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:14:40.389933 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:14:40.389939 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:14:40.389944 | orchestrator | 2025-07-04 18:14:40.389949 | orchestrator | TASK [include_role : memcached] ************************************************ 2025-07-04 18:14:40.389955 | orchestrator | Friday 04 July 2025 18:11:44 +0000 (0:00:00.339) 0:03:40.148 *********** 2025-07-04 18:14:40.389960 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-04 18:14:40.389965 | orchestrator | 2025-07-04 18:14:40.389970 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2025-07-04 18:14:40.389976 | orchestrator | Friday 04 July 2025 18:11:45 +0000 (0:00:01.107) 0:03:41.255 *********** 2025-07-04 18:14:40.389988 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250530', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-07-04 18:14:40.389995 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250530', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-07-04 18:14:40.390009 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250530', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-07-04 18:14:40.390037 | orchestrator | 2025-07-04 18:14:40.390044 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2025-07-04 18:14:40.390054 | orchestrator | Friday 04 July 2025 18:11:47 +0000 (0:00:01.872) 0:03:43.128 *********** 2025-07-04 18:14:40.390064 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 
'registry.osism.tech/kolla/release/memcached:1.6.18.20250530', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-07-04 18:14:40.390070 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:14:40.390076 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250530', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-07-04 18:14:40.390081 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:14:40.390093 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250530', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-07-04 18:14:40 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED 2025-07-04 18:14:40.390099 | orchestrator | 2025-07-04 18:14:40 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:14:40.390110 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:14:40.390116 | orchestrator | 2025-07-04 18:14:40.390121 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2025-07-04 18:14:40.390126 | orchestrator | Friday 04 July 2025 18:11:47 +0000 (0:00:00.397) 0:03:43.526 *********** 2025-07-04 18:14:40.390133 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-07-04 18:14:40.390139 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-07-04 18:14:40.390144 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:14:40.390150 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:14:40.390155 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka',
'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-07-04 18:14:40.390165 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:14:40.390170 | orchestrator | 2025-07-04 18:14:40.390176 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2025-07-04 18:14:40.390181 | orchestrator | Friday 04 July 2025 18:11:48 +0000 (0:00:00.742) 0:03:44.268 *********** 2025-07-04 18:14:40.390186 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:14:40.390191 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:14:40.390197 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:14:40.390202 | orchestrator | 2025-07-04 18:14:40.390207 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2025-07-04 18:14:40.390213 | orchestrator | Friday 04 July 2025 18:11:49 +0000 (0:00:00.730) 0:03:44.999 *********** 2025-07-04 18:14:40.390218 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:14:40.390223 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:14:40.390229 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:14:40.390234 | orchestrator | 2025-07-04 18:14:40.390239 | orchestrator | TASK [include_role : mistral] ************************************************** 2025-07-04 18:14:40.390245 | orchestrator | Friday 04 July 2025 18:11:50 +0000 (0:00:01.274) 0:03:46.273 *********** 2025-07-04 18:14:40.390250 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:14:40.390255 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:14:40.390260 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:14:40.390266 | orchestrator | 2025-07-04 18:14:40.390276 | orchestrator | TASK [include_role : neutron] ************************************************** 2025-07-04 18:14:40.390282 | orchestrator | Friday 04 July 2025 18:11:50 +0000 (0:00:00.322) 0:03:46.596 *********** 2025-07-04 18:14:40.390287 
| orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-04 18:14:40.390292 | orchestrator | 2025-07-04 18:14:40.390298 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2025-07-04 18:14:40.390303 | orchestrator | Friday 04 July 2025 18:11:52 +0000 (0:00:01.428) 0:03:48.024 *********** 2025-07-04 18:14:40.390309 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-04 18:14:40.390327 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.1.1.20250530', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.390334 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.390345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.390354 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 
'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-07-04 18:14:40.390372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.390378 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.1.1.20250530', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-04 18:14:40.390388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.1.1.20250530', 'privileged': True, 'enabled': False, 
'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-04 18:14:40.390395 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.390405 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-04 18:14:40.390411 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 
'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.390420 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-07-04 18:14:40.390426 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-04 18:14:40.390462 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-04 18:14:40.390474 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.1.1.20250530', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.390480 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 
'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-07-04 18:14:40.390489 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.1.1.20250530', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.390495 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-07-04 18:14:40.390505 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': 
{'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.390515 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.390522 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.390527 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-07-04 18:14:40.390536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.390542 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.1.1.20250530', 
'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-04 18:14:40.390558 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-04 18:14:40.390569 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.390574 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-04 18:14:40.390580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.390589 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-07-04 18:14:40.390594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-04 18:14:40.390600 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.1.1.20250530', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.390617 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-07-04 18:14:40.390627 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-07-04 18:14:40.390633 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.390642 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-04 18:14:40.390648 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.1.1.20250530', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.390661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.390667 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.390672 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-07-04 18:14:40.390681 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.390688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.1.1.20250530', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-04 18:14:40.390693 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-04 18:14:40.390715 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 
'timeout': '30'}}})  2025-07-04 18:14:40.390722 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-04 18:14:40.390727 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.390733 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-07-04 18:14:40.390742 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-04 18:14:40.390748 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.1.1.20250530', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.390762 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-07-04 18:14:40.390768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-07-04 18:14:40.390774 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.390780 | orchestrator | 2025-07-04 18:14:40.390785 | orchestrator | TASK [haproxy-config : Add 
configuration for neutron when using single external frontend] *** 2025-07-04 18:14:40.390821 | orchestrator | Friday 04 July 2025 18:11:56 +0000 (0:00:04.605) 0:03:52.630 *********** 2025-07-04 18:14:40.390831 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-04 18:14:40.390837 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.1.1.20250530', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.390859 | orchestrator 
| skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.390865 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.390871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-04 18:14:40.390880 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-07-04 18:14:40.390894 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.1.1.20250530', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.390907 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.390913 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.390919 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.1.1.20250530', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-04 18:14:40.390925 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.390931 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-04 18:14:40.390939 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-07-04 18:14:40.390954 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.390960 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.390966 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-04 18:14:40.390972 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.1.1.20250530', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-04 18:14:40.390998 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.391006 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-04 18:14:40.391016 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-07-04 18:14:40.391032 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.391039 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-04 18:14:40.391044 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-04 18:14:40.391050 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.1.1.20250530', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.391059 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.391075 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-04 18:14:40.391084 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': 
{'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-07-04 18:14:40.391091 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-07-04 18:14:40.391097 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.1.1.20250530', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.391103 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 
'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-07-04 18:14:40.391115 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-04 18:14:40.391121 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.391130 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.391136 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.1.1.20250530', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.391142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.391148 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:14:40.391158 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-07-04 18:14:40.391169 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': 
'30'}}})  2025-07-04 18:14:40.391179 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-07-04 18:14:40.391185 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.391191 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.391196 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.1.1.20250530', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-04 18:14:40.391206 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:14:40.391212 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-04 18:14:40.391218 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-07-04 
18:14:40.391227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-04 18:14:40.391294 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.391312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-07-04 18:14:40.391318 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-04 18:14:40.391324 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.1.1.20250530', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.391338 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 
192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-07-04 18:14:40.391370 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-07-04 18:14:40.391377 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.391383 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:14:40.391388 | orchestrator | 2025-07-04 18:14:40.391394 | orchestrator | 
TASK [haproxy-config : Configuring firewall for neutron] *********************** 2025-07-04 18:14:40.391399 | orchestrator | Friday 04 July 2025 18:11:58 +0000 (0:00:01.521) 0:03:54.152 *********** 2025-07-04 18:14:40.391405 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-07-04 18:14:40.391411 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-07-04 18:14:40.391417 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:14:40.391422 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-07-04 18:14:40.391441 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-07-04 18:14:40.391447 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:14:40.391452 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-07-04 18:14:40.391458 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-07-04 18:14:40.391463 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:14:40.391468 | orchestrator | 2025-07-04 18:14:40.391474 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 
2025-07-04 18:14:40.391479 | orchestrator | Friday 04 July 2025 18:12:00 +0000 (0:00:02.141) 0:03:56.293 *********** 2025-07-04 18:14:40.391484 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:14:40.391490 | orchestrator | changed: [testbed-node-1] 2025-07-04 18:14:40.391495 | orchestrator | changed: [testbed-node-2] 2025-07-04 18:14:40.391501 | orchestrator | 2025-07-04 18:14:40.391506 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2025-07-04 18:14:40.391514 | orchestrator | Friday 04 July 2025 18:12:01 +0000 (0:00:01.380) 0:03:57.674 *********** 2025-07-04 18:14:40.391520 | orchestrator | changed: [testbed-node-1] 2025-07-04 18:14:40.391525 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:14:40.391530 | orchestrator | changed: [testbed-node-2] 2025-07-04 18:14:40.391536 | orchestrator | 2025-07-04 18:14:40.391541 | orchestrator | TASK [include_role : placement] ************************************************ 2025-07-04 18:14:40.391546 | orchestrator | Friday 04 July 2025 18:12:03 +0000 (0:00:02.078) 0:03:59.753 *********** 2025-07-04 18:14:40.391551 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-04 18:14:40.391557 | orchestrator | 2025-07-04 18:14:40.391562 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2025-07-04 18:14:40.391568 | orchestrator | Friday 04 July 2025 18:12:05 +0000 (0:00:01.399) 0:04:01.152 *********** 2025-07-04 18:14:40.391588 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-04 18:14:40.391595 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-04 18:14:40.391605 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-04 18:14:40.391611 | orchestrator | 2025-07-04 18:14:40.391616 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2025-07-04 18:14:40.391621 | orchestrator | Friday 04 July 2025 18:12:09 +0000 (0:00:04.437) 0:04:05.590 *********** 2025-07-04 18:14:40.391630 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-04 18:14:40.391635 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:14:40.391641 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-04 18:14:40.391647 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:14:40.391658 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-04 18:14:40.391667 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:14:40.391673 | orchestrator | 2025-07-04 18:14:40.391679 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2025-07-04 18:14:40.391684 | orchestrator | Friday 04 July 2025 
18:12:10 +0000 (0:00:00.500) 0:04:06.090 *********** 2025-07-04 18:14:40.391690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-07-04 18:14:40.391696 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-07-04 18:14:40.391702 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:14:40.391708 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-07-04 18:14:40.391713 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-07-04 18:14:40.391719 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:14:40.391724 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-07-04 18:14:40.391730 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-07-04 18:14:40.391735 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:14:40.391740 | orchestrator | 2025-07-04 18:14:40.391746 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2025-07-04 18:14:40.391751 | 
orchestrator | Friday 04 July 2025 18:12:10 +0000 (0:00:00.790) 0:04:06.880 *********** 2025-07-04 18:14:40.391756 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:14:40.391764 | orchestrator | changed: [testbed-node-1] 2025-07-04 18:14:40.391770 | orchestrator | changed: [testbed-node-2] 2025-07-04 18:14:40.391775 | orchestrator | 2025-07-04 18:14:40.391780 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2025-07-04 18:14:40.391786 | orchestrator | Friday 04 July 2025 18:12:12 +0000 (0:00:01.657) 0:04:08.537 *********** 2025-07-04 18:14:40.391791 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:14:40.391796 | orchestrator | changed: [testbed-node-1] 2025-07-04 18:14:40.391802 | orchestrator | changed: [testbed-node-2] 2025-07-04 18:14:40.391807 | orchestrator | 2025-07-04 18:14:40.391812 | orchestrator | TASK [include_role : nova] ***************************************************** 2025-07-04 18:14:40.391818 | orchestrator | Friday 04 July 2025 18:12:14 +0000 (0:00:02.125) 0:04:10.663 *********** 2025-07-04 18:14:40.391823 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-04 18:14:40.391828 | orchestrator | 2025-07-04 18:14:40.391834 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2025-07-04 18:14:40.391839 | orchestrator | Friday 04 July 2025 18:12:16 +0000 (0:00:01.247) 0:04:11.910 *********** 2025-07-04 18:14:40.391857 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-04 18:14:40.391868 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.391874 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.391883 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-04 18:14:40.391900 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 
'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-04 18:14:40.391910 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.391916 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.391923 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 
'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.391932 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.391940 | orchestrator | 2025-07-04 18:14:40.391955 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2025-07-04 18:14:40.391963 | orchestrator | Friday 04 July 2025 18:12:19 +0000 (0:00:03.855) 0:04:15.766 *********** 2025-07-04 18:14:40.391989 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-07-04 18:14:40.392006 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.392016 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.392025 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:14:40.392035 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-07-04 18:14:40.392053 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.392060 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.392069 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:14:40.392087 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-07-04 18:14:40.392093 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.392099 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.392104 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:14:40.392110 | orchestrator | 2025-07-04 18:14:40.392115 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2025-07-04 18:14:40.392120 | orchestrator | Friday 04 July 2025 18:12:20 +0000 (0:00:00.985) 0:04:16.751 *********** 2025-07-04 18:14:40.392126 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-07-04 
18:14:40.392135 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-07-04 18:14:40.392141 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-07-04 18:14:40.392151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-07-04 18:14:40.392156 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:14:40.392162 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-07-04 18:14:40.392167 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-07-04 18:14:40.392173 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-07-04 18:14:40.392188 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-07-04 18:14:40.392194 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:14:40.392200 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-07-04 18:14:40.392205 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-07-04 18:14:40.392211 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-07-04 18:14:40.392216 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-07-04 18:14:40.392222 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:14:40.392227 | orchestrator | 2025-07-04 18:14:40.392232 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2025-07-04 18:14:40.392238 | orchestrator | Friday 04 July 2025 18:12:21 +0000 (0:00:00.861) 0:04:17.612 *********** 2025-07-04 18:14:40.392243 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:14:40.392249 | orchestrator | changed: [testbed-node-1] 2025-07-04 18:14:40.392254 | orchestrator | changed: [testbed-node-2] 2025-07-04 18:14:40.392259 | orchestrator | 2025-07-04 18:14:40.392264 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2025-07-04 18:14:40.392270 | orchestrator | Friday 04 July 2025 18:12:23 +0000 (0:00:01.592) 0:04:19.205 *********** 2025-07-04 18:14:40.392275 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:14:40.392280 | orchestrator | changed: [testbed-node-1] 2025-07-04 18:14:40.392286 | orchestrator | changed: [testbed-node-2] 
2025-07-04 18:14:40.392291 | orchestrator | 2025-07-04 18:14:40.392296 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2025-07-04 18:14:40.392301 | orchestrator | Friday 04 July 2025 18:12:25 +0000 (0:00:02.222) 0:04:21.427 *********** 2025-07-04 18:14:40.392307 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-04 18:14:40.392312 | orchestrator | 2025-07-04 18:14:40.392317 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2025-07-04 18:14:40.392323 | orchestrator | Friday 04 July 2025 18:12:27 +0000 (0:00:01.643) 0:04:23.070 *********** 2025-07-04 18:14:40.392332 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2025-07-04 18:14:40.392338 | orchestrator | 2025-07-04 18:14:40.392343 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2025-07-04 18:14:40.392348 | orchestrator | Friday 04 July 2025 18:12:28 +0000 (0:00:01.071) 0:04:24.142 *********** 2025-07-04 18:14:40.392370 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-07-04 18:14:40.392377 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 
'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-07-04 18:14:40.392383 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-07-04 18:14:40.392388 | orchestrator | 2025-07-04 18:14:40.392394 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2025-07-04 18:14:40.392400 | orchestrator | Friday 04 July 2025 18:12:32 +0000 (0:00:03.811) 0:04:27.953 *********** 2025-07-04 18:14:40.392416 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-07-04 18:14:40.392422 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:14:40.392428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 
'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-07-04 18:14:40.392434 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:14:40.392439 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-07-04 18:14:40.392476 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:14:40.392482 | orchestrator | 2025-07-04 18:14:40.392488 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2025-07-04 18:14:40.392493 | orchestrator | Friday 04 July 2025 18:12:33 +0000 (0:00:01.271) 0:04:29.225 *********** 2025-07-04 18:14:40.392499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-07-04 18:14:40.392505 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-07-04 18:14:40.392511 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:14:40.392517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-07-04 18:14:40.392526 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-07-04 18:14:40.392531 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:14:40.392537 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-07-04 18:14:40.392543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-07-04 18:14:40.392548 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:14:40.392553 | orchestrator | 2025-07-04 18:14:40.392559 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-07-04 18:14:40.392564 | orchestrator | Friday 04 July 2025 18:12:35 +0000 (0:00:01.901) 0:04:31.126 *********** 2025-07-04 18:14:40.392569 | orchestrator | changed: [testbed-node-1] 2025-07-04 18:14:40.392575 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:14:40.392580 | orchestrator | changed: [testbed-node-2] 2025-07-04 18:14:40.392585 | orchestrator | 2025-07-04 18:14:40.392591 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-07-04 18:14:40.392596 | orchestrator | Friday 04 July 2025 18:12:37 +0000 (0:00:02.251) 0:04:33.378 *********** 2025-07-04 18:14:40.392602 | orchestrator | changed: 
[testbed-node-0] 2025-07-04 18:14:40.392607 | orchestrator | changed: [testbed-node-1] 2025-07-04 18:14:40.392612 | orchestrator | changed: [testbed-node-2] 2025-07-04 18:14:40.392618 | orchestrator | 2025-07-04 18:14:40.392623 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2025-07-04 18:14:40.392628 | orchestrator | Friday 04 July 2025 18:12:40 +0000 (0:00:03.179) 0:04:36.558 *********** 2025-07-04 18:14:40.392644 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2025-07-04 18:14:40.392650 | orchestrator | 2025-07-04 18:14:40.392656 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2025-07-04 18:14:40.392661 | orchestrator | Friday 04 July 2025 18:12:41 +0000 (0:00:00.751) 0:04:37.310 *********** 2025-07-04 18:14:40.392667 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-07-04 18:14:40.392677 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:14:40.392683 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-07-04 18:14:40.392689 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:14:40.392695 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-07-04 18:14:40.392700 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:14:40.392706 | orchestrator | 2025-07-04 18:14:40.392711 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2025-07-04 18:14:40.392717 | orchestrator | Friday 04 July 2025 18:12:42 +0000 (0:00:01.117) 0:04:38.427 *********** 2025-07-04 18:14:40.392722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-07-04 18:14:40.392731 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:14:40.392736 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': 
{'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-07-04 18:14:40.392742 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:14:40.392747 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-07-04 18:14:40.392753 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:14:40.392758 | orchestrator | 2025-07-04 18:14:40.392764 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2025-07-04 18:14:40.392769 | orchestrator | Friday 04 July 2025 18:12:43 +0000 (0:00:01.351) 0:04:39.779 *********** 2025-07-04 18:14:40.392774 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:14:40.392790 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:14:40.392801 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:14:40.392806 | orchestrator | 2025-07-04 18:14:40.392812 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-07-04 18:14:40.392817 | orchestrator | Friday 04 July 2025 18:12:44 +0000 (0:00:01.093) 0:04:40.872 *********** 2025-07-04 18:14:40.392822 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:14:40.392828 | orchestrator | ok: 
[testbed-node-1] 2025-07-04 18:14:40.392833 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:14:40.392839 | orchestrator | 2025-07-04 18:14:40.392844 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-07-04 18:14:40.392849 | orchestrator | Friday 04 July 2025 18:12:47 +0000 (0:00:02.255) 0:04:43.128 *********** 2025-07-04 18:14:40.392855 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:14:40.392860 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:14:40.392865 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:14:40.392871 | orchestrator | 2025-07-04 18:14:40.392876 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2025-07-04 18:14:40.392881 | orchestrator | Friday 04 July 2025 18:12:50 +0000 (0:00:03.153) 0:04:46.282 *********** 2025-07-04 18:14:40.392886 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2025-07-04 18:14:40.392892 | orchestrator | 2025-07-04 18:14:40.392897 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2025-07-04 18:14:40.392902 | orchestrator | Friday 04 July 2025 18:12:51 +0000 (0:00:01.101) 0:04:47.383 *********** 2025-07-04 18:14:40.392908 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-07-04 18:14:40.392914 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:14:40.392919 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-07-04 18:14:40.392925 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:14:40.392933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-07-04 18:14:40.392939 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:14:40.392945 | orchestrator | 2025-07-04 18:14:40.392950 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2025-07-04 18:14:40.392955 | orchestrator | Friday 04 July 2025 18:12:52 +0000 (0:00:01.038) 0:04:48.422 *********** 2025-07-04 18:14:40.392961 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-07-04 18:14:40.392970 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:14:40.392976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-07-04 18:14:40.392991 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:14:40.392998 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-07-04 18:14:40.393003 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:14:40.393009 | orchestrator | 2025-07-04 18:14:40.393014 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2025-07-04 18:14:40.393019 | orchestrator | Friday 04 July 2025 18:12:53 +0000 (0:00:01.353) 0:04:49.775 *********** 2025-07-04 18:14:40.393025 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:14:40.393030 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:14:40.393035 | orchestrator | skipping: [testbed-node-2] 2025-07-04 
18:14:40.393041 | orchestrator | 2025-07-04 18:14:40.393046 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-07-04 18:14:40.393051 | orchestrator | Friday 04 July 2025 18:12:55 +0000 (0:00:01.978) 0:04:51.753 *********** 2025-07-04 18:14:40.393057 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:14:40.393062 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:14:40.393067 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:14:40.393073 | orchestrator | 2025-07-04 18:14:40.393078 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-07-04 18:14:40.393083 | orchestrator | Friday 04 July 2025 18:12:58 +0000 (0:00:02.439) 0:04:54.193 *********** 2025-07-04 18:14:40.393089 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:14:40.393094 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:14:40.393099 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:14:40.393105 | orchestrator | 2025-07-04 18:14:40.393110 | orchestrator | TASK [include_role : octavia] ************************************************** 2025-07-04 18:14:40.393115 | orchestrator | Friday 04 July 2025 18:13:01 +0000 (0:00:03.280) 0:04:57.474 *********** 2025-07-04 18:14:40.393121 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-04 18:14:40.393126 | orchestrator | 2025-07-04 18:14:40.393131 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2025-07-04 18:14:40.393137 | orchestrator | Friday 04 July 2025 18:13:02 +0000 (0:00:01.372) 0:04:58.846 *********** 2025-07-04 18:14:40.393145 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-07-04 18:14:40.393156 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-07-04 18:14:40.393162 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-07-04 18:14:40.393178 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': 
{'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-07-04 18:14:40.393184 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-07-04 18:14:40.393190 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-07-04 18:14:40.393196 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.393210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-07-04 18:14:40.393216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-07-04 18:14:40.393231 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.393237 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-07-04 18:14:40.393243 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 
'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-07-04 18:14:40.393249 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-07-04 18:14:40.393263 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-07-04 18:14:40.393269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.393274 | orchestrator | 2025-07-04 18:14:40.393280 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2025-07-04 18:14:40.393285 | orchestrator | Friday 04 July 2025 18:13:07 +0000 (0:00:04.196) 0:05:03.043 *********** 2025-07-04 18:14:40.393301 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-07-04 18:14:40.393307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-07-04 18:14:40.393313 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-07-04 18:14:40.393323 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-07-04 18:14:40.393331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.393337 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-07-04 18:14:40.393343 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:14:40.393392 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-07-04 18:14:40.393415 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-07-04 18:14:40.393421 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-07-04 18:14:40.393431 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.393437 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:14:40.393446 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-07-04 18:14:40.393452 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-07-04 18:14:40.393468 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-07-04 18:14:40.393474 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-07-04 18:14:40.393480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-07-04 18:14:40.393489 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:14:40.393495 | orchestrator | 2025-07-04 18:14:40.393500 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2025-07-04 18:14:40.393506 | orchestrator | Friday 04 July 2025 18:13:07 +0000 (0:00:00.772) 0:05:03.815 *********** 2025-07-04 18:14:40.393511 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-07-04 18:14:40.393517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-07-04 18:14:40.393522 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:14:40.393528 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-07-04 18:14:40.393537 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-07-04 18:14:40.393542 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:14:40.393547 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-07-04 18:14:40.393552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-07-04 18:14:40.393557 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:14:40.393561 | orchestrator |
2025-07-04 18:14:40.393566 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************
2025-07-04 18:14:40.393571 | orchestrator | Friday 04 July 2025 18:13:09 +0000 (0:00:01.111) 0:05:04.927 ***********
2025-07-04 18:14:40.393576 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:14:40.393580 | orchestrator | changed: [testbed-node-1]
2025-07-04 18:14:40.393585 | orchestrator | changed: [testbed-node-2]
2025-07-04 18:14:40.393590 | orchestrator |
2025-07-04 18:14:40.393595 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************
2025-07-04 18:14:40.393600 | orchestrator | Friday 04 July 2025 18:13:10 +0000 (0:00:01.701) 0:05:06.628 ***********
2025-07-04 18:14:40.393604 | orchestrator | changed: [testbed-node-1]
2025-07-04 18:14:40.393609 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:14:40.393614 | orchestrator | changed: [testbed-node-2]
2025-07-04 18:14:40.393619 | orchestrator |
2025-07-04 18:14:40.393624 | orchestrator | TASK [include_role : opensearch] ***********************************************
2025-07-04 18:14:40.393628 | orchestrator | Friday 04 July 2025 18:13:12 +0000 (0:00:02.101) 0:05:08.729 ***********
2025-07-04 18:14:40.393633 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-04 18:14:40.393638 | orchestrator |
2025-07-04 18:14:40.393653 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] *****************
2025-07-04 18:14:40.393658 | orchestrator | Friday 04 July 2025 18:13:14 +0000 (0:00:01.403) 0:05:10.133 ***********
2025-07-04 18:14:40.393663 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-07-04 18:14:40.393673 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-07-04 18:14:40.393678 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-07-04 18:14:40.393687 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-07-04 18:14:40.393706 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-07-04 18:14:40.393716 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-07-04 18:14:40.393721 | orchestrator |
2025-07-04 18:14:40.393726 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] ***
2025-07-04 18:14:40.393731 | orchestrator | Friday 04 July 2025 18:13:19 +0000 (0:00:05.444) 0:05:15.578 ***********
2025-07-04 18:14:40.393739 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-07-04 18:14:40.393745 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-07-04 18:14:40.393750 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:14:40.393760 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-07-04 18:14:40.393769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-07-04 18:14:40.393774 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:14:40.393779 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-07-04 18:14:40.393789 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-07-04 18:14:40.393794 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:14:40.393799 | orchestrator |
2025-07-04 18:14:40.393804 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ********************
2025-07-04 18:14:40.393809 | orchestrator | Friday 04 July 2025 18:13:20 +0000 (0:00:01.062) 0:05:16.640 ***********
2025-07-04 18:14:40.393821 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2025-07-04 18:14:40.393826 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-07-04 18:14:40.393832 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-07-04 18:14:40.393837 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:14:40.393842 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2025-07-04 18:14:40.393846 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-07-04 18:14:40.393852 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-07-04 18:14:40.393857 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:14:40.393861 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2025-07-04 18:14:40.393866 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-07-04 18:14:40.393871 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-07-04 18:14:40.393876 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:14:40.393881 | orchestrator |
2025-07-04 18:14:40.393886 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] *********
2025-07-04 18:14:40.393890 | orchestrator | Friday 04 July 2025 18:13:21 +0000 (0:00:00.939) 0:05:17.579 ***********
2025-07-04 18:14:40.393895 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:14:40.393900 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:14:40.393905 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:14:40.393910 | orchestrator |
2025-07-04 18:14:40.393914 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] *********
2025-07-04 18:14:40.393919 | orchestrator | Friday 04 July 2025 18:13:22 +0000 (0:00:00.445) 0:05:18.025 ***********
2025-07-04 18:14:40.393924 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:14:40.393929 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:14:40.393933 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:14:40.393938 | orchestrator |
2025-07-04 18:14:40.393943 | orchestrator | TASK [include_role : prometheus] ***********************************************
2025-07-04 18:14:40.393950 | orchestrator | Friday 04 July 2025 18:13:23 +0000 (0:00:01.400) 0:05:19.426 ***********
2025-07-04 18:14:40.393955 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-04 18:14:40.393960 | orchestrator |
2025-07-04 18:14:40.393964 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] *****************
2025-07-04 18:14:40.393969 | orchestrator | Friday 04 July 2025 18:13:25 +0000 (0:00:01.745) 0:05:21.171 ***********
2025-07-04 18:14:40.393974 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-07-04 18:14:40.393995 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-04 18:14:40.394001 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:14:40.394006 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:14:40.394011 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-04 18:14:40.394041 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-07-04 18:14:40.394050 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-04 18:14:40.394059 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:14:40.394075 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:14:40.394080 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-04 18:14:40.394085 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-07-04 18:14:40.394090 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-04 18:14:40.394096 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:14:40.394104 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:14:40.394116 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-04 18:14:40.394125 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-07-04 18:14:40.394131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-07-04 18:14:40.394136 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:14:40.394141 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:14:40.394149 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-07-04 18:14:40.394159 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-07-04 18:14:40.394168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-07-04 18:14:40.394173 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:14:40.394178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:14:40.394183 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-07-04 18:14:40.394191 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-07-04 18:14:40.394204 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-07-04 18:14:40.394209 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:14:40.394214 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:14:40.394219 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-07-04 18:14:40.394224 | orchestrator |
2025-07-04 18:14:40.394229 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] ***
2025-07-04 18:14:40.394234 | orchestrator | Friday 04 July 2025 18:13:29 +0000 (0:00:04.209) 0:05:25.381 ***********
2025-07-04 18:14:40.394239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-07-04 18:14:40.394250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-04 18:14:40.394255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-04 18:14:40.394260 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-04 18:14:40.394269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-04 18:14:40.394274 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-07-04 18:14:40.394279 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-07-04 18:14:40.394293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-04 18:14:40.394298 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-04 18:14:40.394303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-07-04 18:14:40.394308 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:14:40.394317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-07-04 18:14:40.394322 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-04 18:14:40.394327 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-04 18:14:40.394335 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-04 18:14:40.394344 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-04 18:14:40.394349 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-07-04 18:14:40.394371 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250530.0.20250530', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-07-04 18:14:40.394376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-07-04 18:14:40.394381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-04 18:14:40.394391 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-04 18:14:40.394396 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-04 18:14:40.394401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-07-04 18:14:40.394406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-04 18:14:40.394411 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:14:40.394454 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-04 18:14:40.394467 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-04 18:14:40.394472 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-07-04 18:14:40.394484 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-07-04 18:14:40.394490 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-04 18:14:40.394495 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 
'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-04 18:14:40.394504 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-07-04 18:14:40.394509 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:14:40.394514 | orchestrator | 2025-07-04 18:14:40.394519 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2025-07-04 18:14:40.394524 | orchestrator | Friday 04 July 2025 18:13:30 +0000 (0:00:01.507) 0:05:26.888 *********** 2025-07-04 18:14:40.394529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-07-04 18:14:40.394534 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-07-04 18:14:40.394543 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-07-04 18:14:40.394549 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-07-04 18:14:40.394554 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:14:40.394559 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-07-04 18:14:40.394564 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-07-04 18:14:40.394569 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-07-04 18:14:40.394577 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-07-04 18:14:40.394582 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:14:40.394587 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 
'active_passive': True}})  2025-07-04 18:14:40.394592 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-07-04 18:14:40.394597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-07-04 18:14:40.394602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-07-04 18:14:40.394607 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:14:40.394612 | orchestrator | 2025-07-04 18:14:40.394616 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2025-07-04 18:14:40.394621 | orchestrator | Friday 04 July 2025 18:13:32 +0000 (0:00:01.013) 0:05:27.901 *********** 2025-07-04 18:14:40.394626 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:14:40.394631 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:14:40.394639 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:14:40.394643 | orchestrator | 2025-07-04 18:14:40.394648 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2025-07-04 18:14:40.394653 | orchestrator | Friday 04 July 2025 18:13:32 +0000 (0:00:00.449) 0:05:28.351 *********** 2025-07-04 18:14:40.394658 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:14:40.394662 | orchestrator | skipping: [testbed-node-1] 2025-07-04 
18:14:40.394673 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:14:40.394677 | orchestrator | 2025-07-04 18:14:40.394682 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2025-07-04 18:14:40.394687 | orchestrator | Friday 04 July 2025 18:13:34 +0000 (0:00:01.742) 0:05:30.094 *********** 2025-07-04 18:14:40.394692 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-04 18:14:40.394696 | orchestrator | 2025-07-04 18:14:40.394701 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2025-07-04 18:14:40.394706 | orchestrator | Friday 04 July 2025 18:13:35 +0000 (0:00:01.801) 0:05:31.895 *********** 2025-07-04 18:14:40.394711 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-07-04 18:14:40.394717 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-07-04 18:14:40.394726 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-07-04 18:14:40.394732 | orchestrator | 2025-07-04 18:14:40.394737 | orchestrator | TASK [haproxy-config : Add configuration for 
rabbitmq when using single external frontend] *** 2025-07-04 18:14:40.394741 | orchestrator | Friday 04 July 2025 18:13:38 +0000 (0:00:02.567) 0:05:34.463 *********** 2025-07-04 18:14:40.394750 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-07-04 18:14:40.394759 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-07-04 18:14:40.394764 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:14:40.394769 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:14:40.394777 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-07-04 18:14:40.394783 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:14:40.394787 | orchestrator | 2025-07-04 18:14:40.394792 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2025-07-04 18:14:40.394797 | orchestrator | Friday 04 July 2025 18:13:38 +0000 (0:00:00.381) 0:05:34.845 *********** 2025-07-04 18:14:40.394802 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-07-04 18:14:40.394806 | orchestrator | skipping: 
[testbed-node-0] 2025-07-04 18:14:40.394811 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-07-04 18:14:40.394816 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:14:40.394821 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-07-04 18:14:40.394829 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:14:40.394834 | orchestrator | 2025-07-04 18:14:40.394839 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2025-07-04 18:14:40.394844 | orchestrator | Friday 04 July 2025 18:13:39 +0000 (0:00:01.036) 0:05:35.881 *********** 2025-07-04 18:14:40.394849 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:14:40.394853 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:14:40.394858 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:14:40.394863 | orchestrator | 2025-07-04 18:14:40.394868 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2025-07-04 18:14:40.394872 | orchestrator | Friday 04 July 2025 18:13:40 +0000 (0:00:00.470) 0:05:36.352 *********** 2025-07-04 18:14:40.394877 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:14:40.394885 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:14:40.394889 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:14:40.394894 | orchestrator | 2025-07-04 18:14:40.394899 | orchestrator | TASK [include_role : skyline] ************************************************** 2025-07-04 18:14:40.394904 | orchestrator | Friday 04 July 2025 18:13:41 +0000 (0:00:01.336) 0:05:37.688 *********** 2025-07-04 18:14:40.394908 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-04 18:14:40.394913 | orchestrator 
| 2025-07-04 18:14:40.394918 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2025-07-04 18:14:40.394923 | orchestrator | Friday 04 July 2025 18:13:43 +0000 (0:00:01.787) 0:05:39.476 *********** 2025-07-04 18:14:40.394928 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-07-04 18:14:40.394933 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 
'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-07-04 18:14:40.394942 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-07-04 18:14:40.394954 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-07-04 18:14:40.394960 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-07-04 18:14:40.394965 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-07-04 18:14:40.394970 | orchestrator | 2025-07-04 18:14:40.394975 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2025-07-04 18:14:40.394979 | orchestrator | Friday 04 July 2025 18:13:49 +0000 (0:00:06.113) 0:05:45.589 *********** 2025-07-04 18:14:40.394987 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-07-04 18:14:40.394996 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': 
'30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-07-04 18:14:40.395004 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:14:40.395009 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-07-04 18:14:40.395014 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-07-04 18:14:40.395019 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:14:40.395029 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-07-04 18:14:40.395039 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-07-04 18:14:40.395044 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:14:40.395048 | orchestrator | 2025-07-04 18:14:40.395053 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2025-07-04 18:14:40.395058 | orchestrator | Friday 04 July 2025 18:13:50 +0000 (0:00:00.707) 0:05:46.297 *********** 2025-07-04 18:14:40.395066 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-07-04 18:14:40.395071 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-07-04 18:14:40.395076 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-07-04 18:14:40.395081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-07-04 18:14:40.395086 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:14:40.395091 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-07-04 18:14:40.395095 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-07-04 18:14:40.395100 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-07-04 18:14:40.395105 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-07-04 18:14:40.395110 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:14:40.395115 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-07-04 18:14:40.395123 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-07-04 18:14:40.395128 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-07-04 18:14:40.395137 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': 
'9999', 'tls_backend': 'no'}})  2025-07-04 18:14:40.395142 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:14:40.395146 | orchestrator | 2025-07-04 18:14:40.395151 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2025-07-04 18:14:40.395156 | orchestrator | Friday 04 July 2025 18:13:52 +0000 (0:00:01.673) 0:05:47.970 *********** 2025-07-04 18:14:40.395161 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:14:40.395166 | orchestrator | changed: [testbed-node-1] 2025-07-04 18:14:40.395170 | orchestrator | changed: [testbed-node-2] 2025-07-04 18:14:40.395175 | orchestrator | 2025-07-04 18:14:40.395180 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2025-07-04 18:14:40.395184 | orchestrator | Friday 04 July 2025 18:13:53 +0000 (0:00:01.370) 0:05:49.341 *********** 2025-07-04 18:14:40.395189 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:14:40.395194 | orchestrator | changed: [testbed-node-1] 2025-07-04 18:14:40.395199 | orchestrator | changed: [testbed-node-2] 2025-07-04 18:14:40.395203 | orchestrator | 2025-07-04 18:14:40.395208 | orchestrator | TASK [include_role : swift] **************************************************** 2025-07-04 18:14:40.395213 | orchestrator | Friday 04 July 2025 18:13:55 +0000 (0:00:02.187) 0:05:51.528 *********** 2025-07-04 18:14:40.395218 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:14:40.395222 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:14:40.395227 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:14:40.395232 | orchestrator | 2025-07-04 18:14:40.395237 | orchestrator | TASK [include_role : tacker] *************************************************** 2025-07-04 18:14:40.395241 | orchestrator | Friday 04 July 2025 18:13:55 +0000 (0:00:00.353) 0:05:51.882 *********** 2025-07-04 18:14:40.395246 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:14:40.395251 | orchestrator | 
skipping: [testbed-node-1] 2025-07-04 18:14:40.395255 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:14:40.395260 | orchestrator | 2025-07-04 18:14:40.395265 | orchestrator | TASK [include_role : trove] **************************************************** 2025-07-04 18:14:40.395269 | orchestrator | Friday 04 July 2025 18:13:56 +0000 (0:00:00.648) 0:05:52.531 *********** 2025-07-04 18:14:40.395274 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:14:40.395279 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:14:40.395286 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:14:40.395291 | orchestrator | 2025-07-04 18:14:40.395296 | orchestrator | TASK [include_role : venus] **************************************************** 2025-07-04 18:14:40.395301 | orchestrator | Friday 04 July 2025 18:13:56 +0000 (0:00:00.323) 0:05:52.855 *********** 2025-07-04 18:14:40.395305 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:14:40.395310 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:14:40.395315 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:14:40.395319 | orchestrator | 2025-07-04 18:14:40.395324 | orchestrator | TASK [include_role : watcher] ************************************************** 2025-07-04 18:14:40.395329 | orchestrator | Friday 04 July 2025 18:13:57 +0000 (0:00:00.310) 0:05:53.165 *********** 2025-07-04 18:14:40.395333 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:14:40.395338 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:14:40.395343 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:14:40.395348 | orchestrator | 2025-07-04 18:14:40.395352 | orchestrator | TASK [include_role : zun] ****************************************************** 2025-07-04 18:14:40.395372 | orchestrator | Friday 04 July 2025 18:13:57 +0000 (0:00:00.328) 0:05:53.494 *********** 2025-07-04 18:14:40.395377 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:14:40.395381 | orchestrator | 
skipping: [testbed-node-1] 2025-07-04 18:14:40.395386 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:14:40.395391 | orchestrator | 2025-07-04 18:14:40.395396 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2025-07-04 18:14:40.395400 | orchestrator | Friday 04 July 2025 18:13:58 +0000 (0:00:00.835) 0:05:54.329 *********** 2025-07-04 18:14:40.395405 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:14:40.395410 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:14:40.395415 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:14:40.395420 | orchestrator | 2025-07-04 18:14:40.395425 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2025-07-04 18:14:40.395429 | orchestrator | Friday 04 July 2025 18:13:59 +0000 (0:00:00.706) 0:05:55.035 *********** 2025-07-04 18:14:40.395434 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:14:40.395439 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:14:40.395444 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:14:40.395448 | orchestrator | 2025-07-04 18:14:40.395453 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2025-07-04 18:14:40.395458 | orchestrator | Friday 04 July 2025 18:13:59 +0000 (0:00:00.342) 0:05:55.378 *********** 2025-07-04 18:14:40.395463 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:14:40.395468 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:14:40.395472 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:14:40.395477 | orchestrator | 2025-07-04 18:14:40.395482 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2025-07-04 18:14:40.395487 | orchestrator | Friday 04 July 2025 18:14:00 +0000 (0:00:01.282) 0:05:56.661 *********** 2025-07-04 18:14:40.395491 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:14:40.395496 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:14:40.395501 | orchestrator 
| ok: [testbed-node-2] 2025-07-04 18:14:40.395505 | orchestrator | 2025-07-04 18:14:40.395510 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2025-07-04 18:14:40.395515 | orchestrator | Friday 04 July 2025 18:14:01 +0000 (0:00:00.941) 0:05:57.602 *********** 2025-07-04 18:14:40.395520 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:14:40.395525 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:14:40.395529 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:14:40.395534 | orchestrator | 2025-07-04 18:14:40.395539 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2025-07-04 18:14:40.395544 | orchestrator | Friday 04 July 2025 18:14:02 +0000 (0:00:00.917) 0:05:58.520 *********** 2025-07-04 18:14:40.395548 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:14:40.395553 | orchestrator | changed: [testbed-node-2] 2025-07-04 18:14:40.395558 | orchestrator | changed: [testbed-node-1] 2025-07-04 18:14:40.395563 | orchestrator | 2025-07-04 18:14:40.395567 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2025-07-04 18:14:40.395572 | orchestrator | Friday 04 July 2025 18:14:12 +0000 (0:00:09.491) 0:06:08.012 *********** 2025-07-04 18:14:40.395577 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:14:40.395582 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:14:40.395586 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:14:40.395591 | orchestrator | 2025-07-04 18:14:40.395599 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2025-07-04 18:14:40.395604 | orchestrator | Friday 04 July 2025 18:14:12 +0000 (0:00:00.716) 0:06:08.729 *********** 2025-07-04 18:14:40.395609 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:14:40.395613 | orchestrator | changed: [testbed-node-1] 2025-07-04 18:14:40.395618 | orchestrator | changed: [testbed-node-2] 2025-07-04 
18:14:40.395623 | orchestrator | 2025-07-04 18:14:40.395628 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2025-07-04 18:14:40.395633 | orchestrator | Friday 04 July 2025 18:14:21 +0000 (0:00:08.453) 0:06:17.182 *********** 2025-07-04 18:14:40.395641 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:14:40.395645 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:14:40.395650 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:14:40.395655 | orchestrator | 2025-07-04 18:14:40.395660 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2025-07-04 18:14:40.395665 | orchestrator | Friday 04 July 2025 18:14:26 +0000 (0:00:04.774) 0:06:21.957 *********** 2025-07-04 18:14:40.395669 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:14:40.395674 | orchestrator | changed: [testbed-node-1] 2025-07-04 18:14:40.395679 | orchestrator | changed: [testbed-node-2] 2025-07-04 18:14:40.395684 | orchestrator | 2025-07-04 18:14:40.395689 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2025-07-04 18:14:40.395693 | orchestrator | Friday 04 July 2025 18:14:30 +0000 (0:00:04.437) 0:06:26.394 *********** 2025-07-04 18:14:40.395698 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:14:40.395703 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:14:40.395708 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:14:40.395712 | orchestrator | 2025-07-04 18:14:40.395717 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2025-07-04 18:14:40.395722 | orchestrator | Friday 04 July 2025 18:14:30 +0000 (0:00:00.336) 0:06:26.731 *********** 2025-07-04 18:14:40.395727 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:14:40.395731 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:14:40.395736 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:14:40.395741 | 
orchestrator | 2025-07-04 18:14:40.395749 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2025-07-04 18:14:40.395754 | orchestrator | Friday 04 July 2025 18:14:31 +0000 (0:00:00.734) 0:06:27.466 *********** 2025-07-04 18:14:40.395758 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:14:40.395763 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:14:40.395768 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:14:40.395773 | orchestrator | 2025-07-04 18:14:40.395778 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2025-07-04 18:14:40.395783 | orchestrator | Friday 04 July 2025 18:14:31 +0000 (0:00:00.347) 0:06:27.814 *********** 2025-07-04 18:14:40.395787 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:14:40.395792 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:14:40.395797 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:14:40.395802 | orchestrator | 2025-07-04 18:14:40.395806 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2025-07-04 18:14:40.395811 | orchestrator | Friday 04 July 2025 18:14:32 +0000 (0:00:00.330) 0:06:28.144 *********** 2025-07-04 18:14:40.395816 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:14:40.395821 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:14:40.395825 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:14:40.395830 | orchestrator | 2025-07-04 18:14:40.395835 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2025-07-04 18:14:40.395840 | orchestrator | Friday 04 July 2025 18:14:32 +0000 (0:00:00.362) 0:06:28.507 *********** 2025-07-04 18:14:40.395845 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:14:40.395849 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:14:40.395854 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:14:40.395859 | 
orchestrator | 2025-07-04 18:14:40.395864 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2025-07-04 18:14:40.395869 | orchestrator | Friday 04 July 2025 18:14:33 +0000 (0:00:00.705) 0:06:29.212 *********** 2025-07-04 18:14:40.395873 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:14:40.395878 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:14:40.395883 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:14:40.395887 | orchestrator | 2025-07-04 18:14:40.395892 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2025-07-04 18:14:40.395897 | orchestrator | Friday 04 July 2025 18:14:38 +0000 (0:00:04.858) 0:06:34.071 *********** 2025-07-04 18:14:40.395902 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:14:40.395907 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:14:40.395916 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:14:40.395921 | orchestrator | 2025-07-04 18:14:40.395925 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-04 18:14:40.395930 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-07-04 18:14:40.395935 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-07-04 18:14:40.395940 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-07-04 18:14:40.395945 | orchestrator | 2025-07-04 18:14:40.395950 | orchestrator | 2025-07-04 18:14:40.395955 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-04 18:14:40.395959 | orchestrator | Friday 04 July 2025 18:14:38 +0000 (0:00:00.823) 0:06:34.895 *********** 2025-07-04 18:14:40.395964 | orchestrator | =============================================================================== 2025-07-04 18:14:40.395969 | 
loadbalancer : Start backup haproxy container --------------------------- 9.49s
2025-07-04 18:14:40.395974 | orchestrator | loadbalancer : Start backup proxysql container -------------------------- 8.45s
2025-07-04 18:14:40.395978 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 8.43s
2025-07-04 18:14:40.395986 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 7.47s
2025-07-04 18:14:40.395991 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.11s
2025-07-04 18:14:40.395996 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.44s
2025-07-04 18:14:40.396001 | orchestrator | loadbalancer : Wait for haproxy to listen on VIP ------------------------ 4.86s
2025-07-04 18:14:40.396006 | orchestrator | loadbalancer : Wait for backup proxysql to start ------------------------ 4.77s
2025-07-04 18:14:40.396011 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.74s
2025-07-04 18:14:40.396015 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.61s
2025-07-04 18:14:40.396020 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 4.53s
2025-07-04 18:14:40.396025 | orchestrator | haproxy-config : Copying over placement haproxy config ------------------ 4.44s
2025-07-04 18:14:40.396030 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 4.44s
2025-07-04 18:14:40.396034 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 4.37s
2025-07-04 18:14:40.396039 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 4.31s
2025-07-04 18:14:40.396044 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.21s
haproxy-config : Copying over octavia haproxy config -------------------- 4.20s
2025-07-04 18:14:40.396053 | orchestrator | loadbalancer : Copying checks for services which are enabled ------------ 4.13s
2025-07-04 18:14:40.396058 | orchestrator | loadbalancer : Check loadbalancer containers ---------------------------- 4.09s
2025-07-04 18:14:40.396063 | orchestrator | loadbalancer : Copying over config.json files for services -------------- 3.95s
2025-07-04 18:14:43.433220 | orchestrator | 2025-07-04 18:14:43 | INFO  | Task 71c3bfb9-cb9c-463c-b873-a5612a53c28b is in state STARTED
2025-07-04 18:14:43.434078 | orchestrator | 2025-07-04 18:14:43 | INFO  | Task 4ce4b706-0736-4bb0-acca-49d965e838cc is in state STARTED
2025-07-04 18:14:43.435821 | orchestrator | 2025-07-04 18:14:43 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED
2025-07-04 18:14:43.435900 | orchestrator | 2025-07-04 18:14:43 | INFO  | Wait 1 second(s) until the next check
[... the same three state checks and wait message repeated unchanged roughly every 3 seconds from 18:14:46 through 18:17:34; tasks 71c3bfb9-cb9c-463c-b873-a5612a53c28b, 4ce4b706-0736-4bb0-acca-49d965e838cc and 02ae7d3b-ce19-41e0-b152-00c2d119a997 remained in state STARTED throughout ...]
INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state STARTED
2025-07-04 18:17:34.360218 | orchestrator | 2025-07-04 18:17:34 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:17:37.418434 | orchestrator | 2025-07-04 18:17:37 | INFO  | Task 83caf5cb-3d11-4e8b-b1f4-9779fece63c0 is in state STARTED
2025-07-04 18:17:37.418535 | orchestrator | 2025-07-04 18:17:37 | INFO  | Task 71c3bfb9-cb9c-463c-b873-a5612a53c28b is in state STARTED
2025-07-04 18:17:37.418550 | orchestrator | 2025-07-04 18:17:37 | INFO  | Task 4ce4b706-0736-4bb0-acca-49d965e838cc is in state STARTED
2025-07-04 18:17:37.421429 | orchestrator | 2025-07-04 18:17:37 | INFO  | Task 02ae7d3b-ce19-41e0-b152-00c2d119a997 is in state SUCCESS
2025-07-04 18:17:37.423147 | orchestrator |
2025-07-04 18:17:37.423342 | orchestrator |
2025-07-04 18:17:37.423520 | orchestrator | PLAY [Prepare deployment of Ceph services] *************************************
2025-07-04 18:17:37.423547 | orchestrator |
2025-07-04 18:17:37.423564 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2025-07-04 18:17:37.423637 | orchestrator | Friday 04 July 2025 18:04:39 +0000 (0:00:00.997) 0:00:00.997 ***********
2025-07-04 18:17:37.423674 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-07-04 18:17:37.423696 | orchestrator |
2025-07-04 18:17:37.423714 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2025-07-04 18:17:37.423809 | orchestrator | Friday 04 July 2025 18:04:41 +0000 (0:00:01.590) 0:00:02.587 ***********
2025-07-04 18:17:37.423824 | orchestrator | ok: [testbed-node-3]
2025-07-04 18:17:37.423958 | orchestrator | ok: [testbed-node-4]
2025-07-04 18:17:37.423983 | orchestrator | ok: [testbed-node-5]
2025-07-04 18:17:37.424004 | orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1] 2025-07-04 18:17:37.424044 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:17:37.424064 | orchestrator | 2025-07-04 18:17:37.424084 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2025-07-04 18:17:37.424124 | orchestrator | Friday 04 July 2025 18:04:43 +0000 (0:00:02.164) 0:00:04.751 *********** 2025-07-04 18:17:37.424141 | orchestrator | ok: [testbed-node-3] 2025-07-04 18:17:37.424152 | orchestrator | ok: [testbed-node-4] 2025-07-04 18:17:37.424356 | orchestrator | ok: [testbed-node-5] 2025-07-04 18:17:37.424370 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:17:37.424381 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:17:37.424391 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:17:37.424402 | orchestrator | 2025-07-04 18:17:37.424413 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2025-07-04 18:17:37.424459 | orchestrator | Friday 04 July 2025 18:04:44 +0000 (0:00:00.839) 0:00:05.591 *********** 2025-07-04 18:17:37.424514 | orchestrator | ok: [testbed-node-3] 2025-07-04 18:17:37.424525 | orchestrator | ok: [testbed-node-4] 2025-07-04 18:17:37.424548 | orchestrator | ok: [testbed-node-5] 2025-07-04 18:17:37.424569 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:17:37.424580 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:17:37.424590 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:17:37.424601 | orchestrator | 2025-07-04 18:17:37.424611 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2025-07-04 18:17:37.424628 | orchestrator | Friday 04 July 2025 18:04:45 +0000 (0:00:00.933) 0:00:06.525 *********** 2025-07-04 18:17:37.424647 | orchestrator | ok: [testbed-node-3] 2025-07-04 18:17:37.424666 | orchestrator | ok: [testbed-node-4] 2025-07-04 18:17:37.424685 | orchestrator | ok: [testbed-node-5] 2025-07-04 18:17:37.424703 | orchestrator | ok: [testbed-node-0] 
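(Aside: the repeated "Task … is in state STARTED" / "Wait 1 second(s) until the next check" lines above come from a client polling asynchronous task states until each reaches a terminal state. A minimal sketch of such a poll loop — hypothetical names, not the actual osism client code — could look like:)

```python
import time

def wait_for_tasks(get_state, task_ids, interval=1.0, max_checks=100):
    """Poll task states until every task reaches SUCCESS.

    get_state is a callable mapping a task id to a state string such as
    "STARTED" or "SUCCESS" (a hypothetical stand-in for the real
    task-status API). Returns True if all tasks finished, False on timeout.
    """
    pending = set(task_ids)
    for _ in range(max_checks):
        # sorted() snapshots the set, so we may discard while iterating
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state == "SUCCESS":
                pending.discard(task_id)
        if not pending:
            return True
        print(f"Wait {int(interval)} second(s) until the next check")
        time.sleep(interval)
    return False
```

(End of aside; the console log continues below.)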
2025-07-04 18:17:37.424723 | orchestrator | ok: [testbed-node-1]
2025-07-04 18:17:37.424743 | orchestrator | ok: [testbed-node-2]
2025-07-04 18:17:37.424762 | orchestrator |
2025-07-04 18:17:37.424781 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2025-07-04 18:17:37.424800 | orchestrator | Friday 04 July 2025 18:04:46 +0000 (0:00:00.752) 0:00:07.278 ***********
2025-07-04 18:17:37.424820 | orchestrator | ok: [testbed-node-3]
2025-07-04 18:17:37.424839 | orchestrator | ok: [testbed-node-4]
2025-07-04 18:17:37.425234 | orchestrator | ok: [testbed-node-5]
2025-07-04 18:17:37.425248 | orchestrator | ok: [testbed-node-0]
2025-07-04 18:17:37.425258 | orchestrator | ok: [testbed-node-1]
2025-07-04 18:17:37.425269 | orchestrator | ok: [testbed-node-2]
2025-07-04 18:17:37.425280 | orchestrator |
2025-07-04 18:17:37.425291 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2025-07-04 18:17:37.425302 | orchestrator | Friday 04 July 2025 18:04:46 +0000 (0:00:00.685) 0:00:07.963 ***********
2025-07-04 18:17:37.425312 | orchestrator | ok: [testbed-node-3]
2025-07-04 18:17:37.425323 | orchestrator | ok: [testbed-node-4]
2025-07-04 18:17:37.425333 | orchestrator | ok: [testbed-node-5]
2025-07-04 18:17:37.425344 | orchestrator | ok: [testbed-node-0]
2025-07-04 18:17:37.425355 | orchestrator | ok: [testbed-node-1]
2025-07-04 18:17:37.425365 | orchestrator | ok: [testbed-node-2]
2025-07-04 18:17:37.425376 | orchestrator |
2025-07-04 18:17:37.425387 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2025-07-04 18:17:37.425415 | orchestrator | Friday 04 July 2025 18:04:47 +0000 (0:00:01.052) 0:00:09.016 ***********
2025-07-04 18:17:37.425426 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:17:37.425439 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:17:37.425449 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:17:37.425460 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:17:37.425470 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:17:37.425481 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:17:37.425491 | orchestrator |
2025-07-04 18:17:37.425501 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2025-07-04 18:17:37.425512 | orchestrator | Friday 04 July 2025 18:04:48 +0000 (0:00:00.966) 0:00:09.983 ***********
2025-07-04 18:17:37.425536 | orchestrator | ok: [testbed-node-3]
2025-07-04 18:17:37.425601 | orchestrator | ok: [testbed-node-4]
2025-07-04 18:17:37.425706 | orchestrator | ok: [testbed-node-5]
2025-07-04 18:17:37.425728 | orchestrator | ok: [testbed-node-0]
2025-07-04 18:17:37.425747 | orchestrator | ok: [testbed-node-1]
2025-07-04 18:17:37.425759 | orchestrator | ok: [testbed-node-2]
2025-07-04 18:17:37.425769 | orchestrator |
2025-07-04 18:17:37.425780 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2025-07-04 18:17:37.425791 | orchestrator | Friday 04 July 2025 18:04:50 +0000 (0:00:01.083) 0:00:11.066 ***********
2025-07-04 18:17:37.425802 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-07-04 18:17:37.425813 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-07-04 18:17:37.425823 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-07-04 18:17:37.425834 | orchestrator |
2025-07-04 18:17:37.425844 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2025-07-04 18:17:37.425855 | orchestrator | Friday 04 July 2025 18:04:50 +0000 (0:00:00.789) 0:00:11.856 ***********
2025-07-04 18:17:37.425865 | orchestrator | ok: [testbed-node-3]
2025-07-04 18:17:37.425876 | orchestrator | ok: [testbed-node-4]
2025-07-04 18:17:37.425886 | orchestrator | ok: [testbed-node-5]
2025-07-04 18:17:37.425897 | orchestrator | ok: [testbed-node-0]
2025-07-04 18:17:37.425947 | orchestrator | ok: [testbed-node-1]
2025-07-04 18:17:37.425960 | orchestrator | ok: [testbed-node-2]
2025-07-04 18:17:37.425970 | orchestrator |
2025-07-04 18:17:37.426000 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2025-07-04 18:17:37.426011 | orchestrator | Friday 04 July 2025 18:04:51 +0000 (0:00:00.893) 0:00:12.749 ***********
2025-07-04 18:17:37.426332 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-07-04 18:17:37.426344 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-07-04 18:17:37.426355 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-07-04 18:17:37.426365 | orchestrator |
2025-07-04 18:17:37.426376 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2025-07-04 18:17:37.426387 | orchestrator | Friday 04 July 2025 18:04:54 +0000 (0:00:02.975) 0:00:15.724 ***********
2025-07-04 18:17:37.426410 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-07-04 18:17:37.426421 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-07-04 18:17:37.426446 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-07-04 18:17:37.426457 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:17:37.426468 | orchestrator |
2025-07-04 18:17:37.426478 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2025-07-04 18:17:37.426499 | orchestrator | Friday 04 July 2025 18:04:55 +0000 (0:00:01.176) 0:00:16.901 ***********
2025-07-04 18:17:37.426512 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-07-04 18:17:37.426538 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-07-04 18:17:37.426549 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-07-04 18:17:37.426560 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:17:37.426571 | orchestrator |
2025-07-04 18:17:37.426582 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2025-07-04 18:17:37.426592 | orchestrator | Friday 04 July 2025 18:04:56 +0000 (0:00:00.936) 0:00:17.837 ***********
2025-07-04 18:17:37.426606 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-07-04 18:17:37.426619 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-07-04 18:17:37.426631 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-07-04 18:17:37.426642 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:17:37.426653 | orchestrator |
2025-07-04 18:17:37.426708 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2025-07-04 18:17:37.426720 | orchestrator | Friday 04 July 2025 18:04:57 +0000 (0:00:00.343) 0:00:18.181 ***********
2025-07-04 18:17:37.426872 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-07-04 18:04:52.310639', 'end': '2025-07-04 18:04:52.567584', 'delta': '0:00:00.256945', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-07-04 18:17:37.426890 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-07-04 18:04:53.393058', 'end': '2025-07-04 18:04:53.700104', 'delta': '0:00:00.307046', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-07-04 18:17:37.426916 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-07-04 18:04:54.235125', 'end': '2025-07-04 18:04:54.528568', 'delta': '0:00:00.293443', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-07-04 18:17:37.426928 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:17:37.426939 | orchestrator |
2025-07-04 18:17:37.426949 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2025-07-04 18:17:37.426961 | orchestrator | Friday 04 July 2025 18:04:57 +0000 (0:00:00.377) 0:00:18.558 ***********
2025-07-04 18:17:37.426972 | orchestrator | ok: [testbed-node-3]
2025-07-04 18:17:37.426983 | orchestrator | ok: [testbed-node-4]
2025-07-04 18:17:37.426994 | orchestrator | ok: [testbed-node-5]
2025-07-04 18:17:37.427036 | orchestrator | ok: [testbed-node-0]
2025-07-04 18:17:37.427047 | orchestrator | ok: [testbed-node-1]
2025-07-04 18:17:37.427058 | orchestrator | ok: [testbed-node-2]
2025-07-04 18:17:37.427068 | orchestrator |
2025-07-04 18:17:37.427079 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2025-07-04
18:17:37.427090 | orchestrator | Friday 04 July 2025 18:04:59 +0000 (0:00:00.960) 0:00:20.427 ***********
2025-07-04 18:17:37.427101 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-07-04 18:17:37.427111 | orchestrator |
2025-07-04 18:17:37.427122 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2025-07-04 18:17:37.427133 | orchestrator | Friday 04 July 2025 18:05:00 +0000 (0:00:00.960) 0:00:21.387 ***********
2025-07-04 18:17:37.427144 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:17:37.427155 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:17:37.427185 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:17:37.427196 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:17:37.427207 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:17:37.427217 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:17:37.427228 | orchestrator |
2025-07-04 18:17:37.427238 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2025-07-04 18:17:37.427249 | orchestrator | Friday 04 July 2025 18:05:01 +0000 (0:00:01.101) 0:00:22.489 ***********
2025-07-04 18:17:37.427272 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:17:37.427283 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:17:37.427294 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:17:37.427305 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:17:37.427315 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:17:37.427326 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:17:37.427336 | orchestrator |
2025-07-04 18:17:37.427347 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-07-04 18:17:37.427357 | orchestrator | Friday 04 July 2025 18:05:02 +0000 (0:00:01.103) 0:00:23.593 ***********
2025-07-04 18:17:37.427368 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:17:37.427379 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:17:37.427389 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:17:37.427400 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:17:37.427410 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:17:37.427421 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:17:37.427431 | orchestrator |
2025-07-04 18:17:37.427442 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2025-07-04 18:17:37.427453 | orchestrator | Friday 04 July 2025 18:05:03 +0000 (0:00:01.380) 0:00:24.973 ***********
2025-07-04 18:17:37.427463 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:17:37.427481 | orchestrator |
2025-07-04 18:17:37.427492 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2025-07-04 18:17:37.427503 | orchestrator | Friday 04 July 2025 18:05:04 +0000 (0:00:00.176) 0:00:25.150 ***********
2025-07-04 18:17:37.427513 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:17:37.427524 | orchestrator |
2025-07-04 18:17:37.427535 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-07-04 18:17:37.427545 | orchestrator | Friday 04 July 2025 18:05:04 +0000 (0:00:00.377) 0:00:25.528 ***********
2025-07-04 18:17:37.427556 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:17:37.427566 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:17:37.427577 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:17:37.427588 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:17:37.427598 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:17:37.427609 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:17:37.427620 | orchestrator |
2025-07-04 18:17:37.427649 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2025-07-04 18:17:37.427669 | orchestrator | Friday 04 July 2025 18:05:05 +0000 (0:00:01.254) 0:00:26.783 ***********
2025-07-04 18:17:37.427688 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:17:37.427708 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:17:37.427729 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:17:37.427747 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:17:37.427769 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:17:37.427788 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:17:37.427809 | orchestrator |
2025-07-04 18:17:37.427823 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2025-07-04 18:17:37.427834 | orchestrator | Friday 04 July 2025 18:05:07 +0000 (0:00:01.845) 0:00:28.629 ***********
2025-07-04 18:17:37.427844 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:17:37.427855 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:17:37.427865 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:17:37.427876 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:17:37.427887 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:17:37.427897 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:17:37.427908 | orchestrator |
2025-07-04 18:17:37.427918 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2025-07-04 18:17:37.427936 | orchestrator | Friday 04 July 2025 18:05:08 +0000 (0:00:00.963) 0:00:29.813 ***********
2025-07-04 18:17:37.427947 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:17:37.427957 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:17:37.427968 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:17:37.427978 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:17:37.427989 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:17:37.427999 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:17:37.428010 | orchestrator |
2025-07-04 18:17:37.428021 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2025-07-04 18:17:37.428031 | orchestrator | Friday 04 July 2025 18:05:09 +0000 (0:00:00.847) 0:00:30.776 ***********
2025-07-04 18:17:37.428042 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:17:37.428053 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:17:37.428063 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:17:37.428074 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:17:37.428084 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:17:37.428095 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:17:37.428105 | orchestrator |
2025-07-04 18:17:37.428116 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2025-07-04 18:17:37.428126 | orchestrator | Friday 04 July 2025 18:05:10 +0000 (0:00:00.847) 0:00:31.624 ***********
2025-07-04 18:17:37.428137 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:17:37.428148 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:17:37.428178 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:17:37.428189 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:17:37.428208 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:17:37.428219 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:17:37.428229 | orchestrator |
2025-07-04 18:17:37.428240 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2025-07-04 18:17:37.428265 | orchestrator | Friday 04 July 2025 18:05:11 +0000 (0:00:01.293) 0:00:32.917 ***********
2025-07-04 18:17:37.428276 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:17:37.428287 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:17:37.428298 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:17:37.428308 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:17:37.428319 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:17:37.428341 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:17:37.428352 | orchestrator |
2025-07-04 18:17:37.428363 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2025-07-04 18:17:37.428373 | orchestrator | Friday 04 July 2025 18:05:12 +0000 (0:00:00.918) 0:00:33.835 ***********
2025-07-04 18:17:37.428386 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--32d6ac83--1783--5cc7--8f93--7bc92d6b2f36-osd--block--32d6ac83--1783--5cc7--8f93--7bc92d6b2f36', 'dm-uuid-LVM-B3Y3vVt13oq7W12qJO9i0i6uep7VTFvfcGfEXbvVV6V6O7RTne1vNFxTHUmPjFQE'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-07-04 18:17:37.428399 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--50c65579--7f86--5010--a824--2221e6b8d3f0-osd--block--50c65579--7f86--5010--a824--2221e6b8d3f0', 'dm-uuid-LVM-ZG3XWcaXjrAvYnnejqBIYJ0ciDWIs1Csg0ixG4tV2ItMKloRzAL7LZn9V6kamgcP'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-07-04 18:17:37.428434 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-07-04 18:17:37.428447 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-07-04 18:17:37.428458 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-07-04 18:17:37.428475 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-07-04 18:17:37.428493 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-07-04 18:17:37.428504 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-07-04 18:17:37.428515 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-07-04 18:17:37.428526 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-07-04 18:17:37.428557 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc.
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab42ac05-5a2a-4b10-b0be-14fcaa2726cd', 'scsi-SQEMU_QEMU_HARDDISK_ab42ac05-5a2a-4b10-b0be-14fcaa2726cd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab42ac05-5a2a-4b10-b0be-14fcaa2726cd-part1', 'scsi-SQEMU_QEMU_HARDDISK_ab42ac05-5a2a-4b10-b0be-14fcaa2726cd-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab42ac05-5a2a-4b10-b0be-14fcaa2726cd-part14', 'scsi-SQEMU_QEMU_HARDDISK_ab42ac05-5a2a-4b10-b0be-14fcaa2726cd-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab42ac05-5a2a-4b10-b0be-14fcaa2726cd-part15', 'scsi-SQEMU_QEMU_HARDDISK_ab42ac05-5a2a-4b10-b0be-14fcaa2726cd-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab42ac05-5a2a-4b10-b0be-14fcaa2726cd-part16', 'scsi-SQEMU_QEMU_HARDDISK_ab42ac05-5a2a-4b10-b0be-14fcaa2726cd-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-07-04 18:17:37.428572 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--32d6ac83--1783--5cc7--8f93--7bc92d6b2f36-osd--block--32d6ac83--1783--5cc7--8f93--7bc92d6b2f36'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-YCSxTc-76yS-14eo-2p4B-V1C4-kRp3-c6rJuf', 'scsi-0QEMU_QEMU_HARDDISK_f1ee158f-8183-4691-b988-cdb0b3746d63', 'scsi-SQEMU_QEMU_HARDDISK_f1ee158f-8183-4691-b988-cdb0b3746d63'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-07-04 18:17:37.428591 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--50c65579--7f86--5010--a824--2221e6b8d3f0-osd--block--50c65579--7f86--5010--a824--2221e6b8d3f0'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-HYJGol-8JGG-VzqY-74tM-pxx0-LLRX-2M1TEb', 'scsi-0QEMU_QEMU_HARDDISK_cc10544f-afe1-4b17-ac35-d479dbd44023', 'scsi-SQEMU_QEMU_HARDDISK_cc10544f-afe1-4b17-ac35-d479dbd44023'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-07-04 18:17:37.428602 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--0c11b362--ac03--5009--be6f--11a9ef5f18dc-osd--block--0c11b362--ac03--5009--be6f--11a9ef5f18dc', 'dm-uuid-LVM-85sUfP606lq7Q3qlcfR1IFsiywW560yhDPu2j2sIi3qGNQFOf68DzrNES5iiIM0e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-07-04 18:17:37.428614 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b396848d--3790--5c5a--8f8a--1e47b4270a43-osd--block--b396848d--3790--5c5a--8f8a--1e47b4270a43', 'dm-uuid-LVM-xMS0sPfwvCdF5iP1LiDfuNYivGcUuqa86TUlwLmuLCuTRsoOe4i32c1w7HKJVWmz'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-07-04 18:17:37.428634 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c678ea0e-f232-4db4-9458-94e4077f665f', 'scsi-SQEMU_QEMU_HARDDISK_c678ea0e-f232-4db4-9458-94e4077f665f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-07-04 18:17:37.428646 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-04-17-25-01-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-07-04 18:17:37.428669 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-07-04 18:17:37.428687 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-07-04 18:17:37.428698 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-07-04 18:17:37.428710 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-07-04 18:17:37.428721 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-07-04 18:17:37.428732 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-07-04 18:17:37.428743 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [],
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-04 18:17:37.428761 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-04 18:17:37.428779 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_654ae738-db23-4503-810d-da49c3934f2e', 'scsi-SQEMU_QEMU_HARDDISK_654ae738-db23-4503-810d-da49c3934f2e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_654ae738-db23-4503-810d-da49c3934f2e-part1', 'scsi-SQEMU_QEMU_HARDDISK_654ae738-db23-4503-810d-da49c3934f2e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_654ae738-db23-4503-810d-da49c3934f2e-part14', 'scsi-SQEMU_QEMU_HARDDISK_654ae738-db23-4503-810d-da49c3934f2e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_654ae738-db23-4503-810d-da49c3934f2e-part15', 'scsi-SQEMU_QEMU_HARDDISK_654ae738-db23-4503-810d-da49c3934f2e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_654ae738-db23-4503-810d-da49c3934f2e-part16', 'scsi-SQEMU_QEMU_HARDDISK_654ae738-db23-4503-810d-da49c3934f2e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-04 18:17:37.428798 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--0c11b362--ac03--5009--be6f--11a9ef5f18dc-osd--block--0c11b362--ac03--5009--be6f--11a9ef5f18dc'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-lAj6H2-GbpL-tiIF-gWFe-eMOf-QtJU-zCspVm', 'scsi-0QEMU_QEMU_HARDDISK_22af1316-5bc1-4af9-ac7a-65db3b57cabb', 'scsi-SQEMU_QEMU_HARDDISK_22af1316-5bc1-4af9-ac7a-65db3b57cabb'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-04 18:17:37.428823 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--b396848d--3790--5c5a--8f8a--1e47b4270a43-osd--block--b396848d--3790--5c5a--8f8a--1e47b4270a43'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-SSRNip-mMzR-tZor-FjCQ-hyeQ-c1ou-teOgYN', 'scsi-0QEMU_QEMU_HARDDISK_f2e9dc75-50de-4afc-bb89-e69d1400c858', 'scsi-SQEMU_QEMU_HARDDISK_f2e9dc75-50de-4afc-bb89-e69d1400c858'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-04 18:17:37.428836 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:17:37.428856 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9dcda133-58d2-4853-8afe-c4a876875c80', 'scsi-SQEMU_QEMU_HARDDISK_9dcda133-58d2-4853-8afe-c4a876875c80'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-04 18:17:37.428873 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-04-17-25-06-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-04 18:17:37.428891 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a98224fe--e18a--5ddc--b2f0--6ffdc4d7e2d6-osd--block--a98224fe--e18a--5ddc--b2f0--6ffdc4d7e2d6', 'dm-uuid-LVM-9yNJ8algFQCe0Lclf5Jy1KC3jWf3L15em4DweRrWAFPNdfMxHldV7he5T2KUXFML'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-07-04 18:17:37.428902 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--38a85088--e19d--56c7--801b--f45e1c084bd2-osd--block--38a85088--e19d--56c7--801b--f45e1c084bd2', 
'dm-uuid-LVM-hIEJ3Y0TZU0RuTYN1UttFgCpmaDl5TJ7jivGPkU5G3lWJgmFJZiQcEWy2AJm3Cbl'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-07-04 18:17:37.428913 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-04 18:17:37.428925 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-04 18:17:37.428935 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-04 18:17:37.428946 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': 
'0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-04 18:17:37.428963 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-04 18:17:37.428974 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-04 18:17:37.428996 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-04 18:17:37.429007 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': 
'0', 'vendor': None, 'virtual': 1}})  2025-07-04 18:17:37.429019 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5fbf5c6-81a8-4539-96cc-19329771a958', 'scsi-SQEMU_QEMU_HARDDISK_e5fbf5c6-81a8-4539-96cc-19329771a958'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5fbf5c6-81a8-4539-96cc-19329771a958-part1', 'scsi-SQEMU_QEMU_HARDDISK_e5fbf5c6-81a8-4539-96cc-19329771a958-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5fbf5c6-81a8-4539-96cc-19329771a958-part14', 'scsi-SQEMU_QEMU_HARDDISK_e5fbf5c6-81a8-4539-96cc-19329771a958-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5fbf5c6-81a8-4539-96cc-19329771a958-part15', 'scsi-SQEMU_QEMU_HARDDISK_e5fbf5c6-81a8-4539-96cc-19329771a958-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5fbf5c6-81a8-4539-96cc-19329771a958-part16', 'scsi-SQEMU_QEMU_HARDDISK_e5fbf5c6-81a8-4539-96cc-19329771a958-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-04 18:17:37.429038 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--a98224fe--e18a--5ddc--b2f0--6ffdc4d7e2d6-osd--block--a98224fe--e18a--5ddc--b2f0--6ffdc4d7e2d6'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-fdDzjd-9T9M-cqTu-GJl1-h4RP-aBJ6-IHudZ4', 'scsi-0QEMU_QEMU_HARDDISK_cc9ae976-88cb-4b21-9449-d8985ff12d4f', 'scsi-SQEMU_QEMU_HARDDISK_cc9ae976-88cb-4b21-9449-d8985ff12d4f'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-04 18:17:37.429061 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--38a85088--e19d--56c7--801b--f45e1c084bd2-osd--block--38a85088--e19d--56c7--801b--f45e1c084bd2'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-YiSio8-0igJ-vSex-UFpF-3jw7-YaBM-7i3yTR', 'scsi-0QEMU_QEMU_HARDDISK_d957e37b-6f48-487c-9682-d56dbc604f5a', 'scsi-SQEMU_QEMU_HARDDISK_d957e37b-6f48-487c-9682-d56dbc604f5a'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-04 18:17:37.429073 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_36831ba3-00a3-40d1-8c8d-d5688ce5b92e', 'scsi-SQEMU_QEMU_HARDDISK_36831ba3-00a3-40d1-8c8d-d5688ce5b92e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-04 18:17:37.429084 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-04-17-24-57-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-04 18:17:37.429095 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:17:37.429107 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-04 18:17:37.429118 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-04 18:17:37.429129 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-04 18:17:37.429140 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-04 18:17:37.429206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-04 18:17:37.429228 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-04 18:17:37.429244 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-04 18:17:37.429255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-04 18:17:37.429267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_acc40fa0-2709-4f7e-bb91-7c7e8e422ea3', 'scsi-SQEMU_QEMU_HARDDISK_acc40fa0-2709-4f7e-bb91-7c7e8e422ea3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_acc40fa0-2709-4f7e-bb91-7c7e8e422ea3-part1', 'scsi-SQEMU_QEMU_HARDDISK_acc40fa0-2709-4f7e-bb91-7c7e8e422ea3-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_acc40fa0-2709-4f7e-bb91-7c7e8e422ea3-part14', 'scsi-SQEMU_QEMU_HARDDISK_acc40fa0-2709-4f7e-bb91-7c7e8e422ea3-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': 
{'ids': ['scsi-0QEMU_QEMU_HARDDISK_acc40fa0-2709-4f7e-bb91-7c7e8e422ea3-part15', 'scsi-SQEMU_QEMU_HARDDISK_acc40fa0-2709-4f7e-bb91-7c7e8e422ea3-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_acc40fa0-2709-4f7e-bb91-7c7e8e422ea3-part16', 'scsi-SQEMU_QEMU_HARDDISK_acc40fa0-2709-4f7e-bb91-7c7e8e422ea3-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-04 18:17:37.429287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-04-17-24-59-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-04 18:17:37.429306 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:17:37.429317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': 
'512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-04 18:17:37.429333 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-04 18:17:37.429344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-04 18:17:37.429355 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-04 18:17:37.429366 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-04 18:17:37.429377 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-04 18:17:37.429388 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-04 18:17:37.429399 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-04 18:17:37.429410 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:17:37.429446 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eaff6497-f877-4154-a511-2c6d9abffd21', 'scsi-SQEMU_QEMU_HARDDISK_eaff6497-f877-4154-a511-2c6d9abffd21'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eaff6497-f877-4154-a511-2c6d9abffd21-part1', 'scsi-SQEMU_QEMU_HARDDISK_eaff6497-f877-4154-a511-2c6d9abffd21-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eaff6497-f877-4154-a511-2c6d9abffd21-part14', 'scsi-SQEMU_QEMU_HARDDISK_eaff6497-f877-4154-a511-2c6d9abffd21-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eaff6497-f877-4154-a511-2c6d9abffd21-part15', 'scsi-SQEMU_QEMU_HARDDISK_eaff6497-f877-4154-a511-2c6d9abffd21-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eaff6497-f877-4154-a511-2c6d9abffd21-part16', 'scsi-SQEMU_QEMU_HARDDISK_eaff6497-f877-4154-a511-2c6d9abffd21-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-04 18:17:37.429460 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-04-17-25-03-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-04 18:17:37.429472 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:17:37.429483 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-04 18:17:37.429494 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-04 18:17:37.429505 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2025-07-04 18:17:37.429523 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-04 18:17:37.429540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-04 18:17:37.429552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-04 18:17:37.429581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-04 18:17:37.429593 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-04 18:17:37.429605 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fb812322-6028-4c88-9c9a-0c04dc1dfbca', 'scsi-SQEMU_QEMU_HARDDISK_fb812322-6028-4c88-9c9a-0c04dc1dfbca'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fb812322-6028-4c88-9c9a-0c04dc1dfbca-part1', 'scsi-SQEMU_QEMU_HARDDISK_fb812322-6028-4c88-9c9a-0c04dc1dfbca-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fb812322-6028-4c88-9c9a-0c04dc1dfbca-part14', 'scsi-SQEMU_QEMU_HARDDISK_fb812322-6028-4c88-9c9a-0c04dc1dfbca-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fb812322-6028-4c88-9c9a-0c04dc1dfbca-part15', 'scsi-SQEMU_QEMU_HARDDISK_fb812322-6028-4c88-9c9a-0c04dc1dfbca-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fb812322-6028-4c88-9c9a-0c04dc1dfbca-part16', 'scsi-SQEMU_QEMU_HARDDISK_fb812322-6028-4c88-9c9a-0c04dc1dfbca-part16'], 'labels': 
['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-07-04 18:17:37.429631 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-04-17-25-04-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-07-04 18:17:37.429643 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:17:37.429654 | orchestrator |
2025-07-04 18:17:37.429665 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2025-07-04 18:17:37.429676 | orchestrator | Friday 04 July 2025 18:05:14 +0000 (0:00:02.106) 0:00:35.942 ***********
2025-07-04 18:17:37.429694 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--32d6ac83--1783--5cc7--8f93--7bc92d6b2f36-osd--block--32d6ac83--1783--5cc7--8f93--7bc92d6b2f36', 'dm-uuid-LVM-B3Y3vVt13oq7W12qJO9i0i6uep7VTFvfcGfEXbvVV6V6O7RTne1vNFxTHUmPjFQE'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable':
'0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:17:37.429707 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--50c65579--7f86--5010--a824--2221e6b8d3f0-osd--block--50c65579--7f86--5010--a824--2221e6b8d3f0', 'dm-uuid-LVM-ZG3XWcaXjrAvYnnejqBIYJ0ciDWIs1Csg0ixG4tV2ItMKloRzAL7LZn9V6kamgcP'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:17:37.429718 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:17:37.429729 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 
'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:17:37.429745 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:17:37.429760 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:17:37.429771 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:17:37.429785 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--0c11b362--ac03--5009--be6f--11a9ef5f18dc-osd--block--0c11b362--ac03--5009--be6f--11a9ef5f18dc', 'dm-uuid-LVM-85sUfP606lq7Q3qlcfR1IFsiywW560yhDPu2j2sIi3qGNQFOf68DzrNES5iiIM0e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:17:37.429795 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:17:37.429805 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': 
{'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b396848d--3790--5c5a--8f8a--1e47b4270a43-osd--block--b396848d--3790--5c5a--8f8a--1e47b4270a43', 'dm-uuid-LVM-xMS0sPfwvCdF5iP1LiDfuNYivGcUuqa86TUlwLmuLCuTRsoOe4i32c1w7HKJVWmz'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:17:37.429821 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:17:37.429837 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:17:37.429847 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:17:37.429938 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:17:37.429964 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab42ac05-5a2a-4b10-b0be-14fcaa2726cd', 'scsi-SQEMU_QEMU_HARDDISK_ab42ac05-5a2a-4b10-b0be-14fcaa2726cd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab42ac05-5a2a-4b10-b0be-14fcaa2726cd-part1', 'scsi-SQEMU_QEMU_HARDDISK_ab42ac05-5a2a-4b10-b0be-14fcaa2726cd-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab42ac05-5a2a-4b10-b0be-14fcaa2726cd-part14', 'scsi-SQEMU_QEMU_HARDDISK_ab42ac05-5a2a-4b10-b0be-14fcaa2726cd-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab42ac05-5a2a-4b10-b0be-14fcaa2726cd-part15', 'scsi-SQEMU_QEMU_HARDDISK_ab42ac05-5a2a-4b10-b0be-14fcaa2726cd-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab42ac05-5a2a-4b10-b0be-14fcaa2726cd-part16', 'scsi-SQEMU_QEMU_HARDDISK_ab42ac05-5a2a-4b10-b0be-14fcaa2726cd-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-07-04 18:17:37.429988 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:17:37.430005 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--32d6ac83--1783--5cc7--8f93--7bc92d6b2f36-osd--block--32d6ac83--1783--5cc7--8f93--7bc92d6b2f36'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-YCSxTc-76yS-14eo-2p4B-V1C4-kRp3-c6rJuf', 'scsi-0QEMU_QEMU_HARDDISK_f1ee158f-8183-4691-b988-cdb0b3746d63', 'scsi-SQEMU_QEMU_HARDDISK_f1ee158f-8183-4691-b988-cdb0b3746d63'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:17:37.430061 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:17:37.430075 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--50c65579--7f86--5010--a824--2221e6b8d3f0-osd--block--50c65579--7f86--5010--a824--2221e6b8d3f0'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-HYJGol-8JGG-VzqY-74tM-pxx0-LLRX-2M1TEb', 'scsi-0QEMU_QEMU_HARDDISK_cc10544f-afe1-4b17-ac35-d479dbd44023', 'scsi-SQEMU_QEMU_HARDDISK_cc10544f-afe1-4b17-ac35-d479dbd44023'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:17:37.430093 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c678ea0e-f232-4db4-9458-94e4077f665f', 'scsi-SQEMU_QEMU_HARDDISK_c678ea0e-f232-4db4-9458-94e4077f665f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:17:37.430110 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:17:37.430121 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-04-17-25-01-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:17:37.430136 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:17:37.430147 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:17:37.430209 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:17:37.430222 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a98224fe--e18a--5ddc--b2f0--6ffdc4d7e2d6-osd--block--a98224fe--e18a--5ddc--b2f0--6ffdc4d7e2d6', 'dm-uuid-LVM-9yNJ8algFQCe0Lclf5Jy1KC3jWf3L15em4DweRrWAFPNdfMxHldV7he5T2KUXFML'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:17:37.430239 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:17:37.430255 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--38a85088--e19d--56c7--801b--f45e1c084bd2-osd--block--38a85088--e19d--56c7--801b--f45e1c084bd2', 'dm-uuid-LVM-hIEJ3Y0TZU0RuTYN1UttFgCpmaDl5TJ7jivGPkU5G3lWJgmFJZiQcEWy2AJm3Cbl'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:17:37.430272 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_654ae738-db23-4503-810d-da49c3934f2e', 'scsi-SQEMU_QEMU_HARDDISK_654ae738-db23-4503-810d-da49c3934f2e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_654ae738-db23-4503-810d-da49c3934f2e-part1', 'scsi-SQEMU_QEMU_HARDDISK_654ae738-db23-4503-810d-da49c3934f2e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_654ae738-db23-4503-810d-da49c3934f2e-part14', 'scsi-SQEMU_QEMU_HARDDISK_654ae738-db23-4503-810d-da49c3934f2e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_654ae738-db23-4503-810d-da49c3934f2e-part15', 
'scsi-SQEMU_QEMU_HARDDISK_654ae738-db23-4503-810d-da49c3934f2e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_654ae738-db23-4503-810d-da49c3934f2e-part16', 'scsi-SQEMU_QEMU_HARDDISK_654ae738-db23-4503-810d-da49c3934f2e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:17:37.430290 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:17:37.430300 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:17:37.430315 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:17:37.430331 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--0c11b362--ac03--5009--be6f--11a9ef5f18dc-osd--block--0c11b362--ac03--5009--be6f--11a9ef5f18dc'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-lAj6H2-GbpL-tiIF-gWFe-eMOf-QtJU-zCspVm', 'scsi-0QEMU_QEMU_HARDDISK_22af1316-5bc1-4af9-ac7a-65db3b57cabb', 'scsi-SQEMU_QEMU_HARDDISK_22af1316-5bc1-4af9-ac7a-65db3b57cabb'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:17:37.430341 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:17:37.430351 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:17:37.430369 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional 
result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--b396848d--3790--5c5a--8f8a--1e47b4270a43-osd--block--b396848d--3790--5c5a--8f8a--1e47b4270a43'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-SSRNip-mMzR-tZor-FjCQ-hyeQ-c1ou-teOgYN', 'scsi-0QEMU_QEMU_HARDDISK_f2e9dc75-50de-4afc-bb89-e69d1400c858', 'scsi-SQEMU_QEMU_HARDDISK_f2e9dc75-50de-4afc-bb89-e69d1400c858'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:17:37.430379 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:17:37.430404 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:17:37.430422 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:17:37.430432 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9dcda133-58d2-4853-8afe-c4a876875c80', 'scsi-SQEMU_QEMU_HARDDISK_9dcda133-58d2-4853-8afe-c4a876875c80'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:17:37.430442 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:17:37.430459 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:17:37.430469 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:17:37.430487 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-04-17-25-06-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:17:37.430501 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:17:37.430511 | orchestrator | skipping: [testbed-node-4] 2025-07-04 
18:17:37.430522 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:17:37.430532 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:17:37.430571 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5fbf5c6-81a8-4539-96cc-19329771a958', 'scsi-SQEMU_QEMU_HARDDISK_e5fbf5c6-81a8-4539-96cc-19329771a958'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5fbf5c6-81a8-4539-96cc-19329771a958-part1', 'scsi-SQEMU_QEMU_HARDDISK_e5fbf5c6-81a8-4539-96cc-19329771a958-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5fbf5c6-81a8-4539-96cc-19329771a958-part14', 'scsi-SQEMU_QEMU_HARDDISK_e5fbf5c6-81a8-4539-96cc-19329771a958-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5fbf5c6-81a8-4539-96cc-19329771a958-part15', 'scsi-SQEMU_QEMU_HARDDISK_e5fbf5c6-81a8-4539-96cc-19329771a958-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5fbf5c6-81a8-4539-96cc-19329771a958-part16', 'scsi-SQEMU_QEMU_HARDDISK_e5fbf5c6-81a8-4539-96cc-19329771a958-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-07-04 18:17:37.430588 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:17:37.430597 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:17:37.430610 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--a98224fe--e18a--5ddc--b2f0--6ffdc4d7e2d6-osd--block--a98224fe--e18a--5ddc--b2f0--6ffdc4d7e2d6'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-fdDzjd-9T9M-cqTu-GJl1-h4RP-aBJ6-IHudZ4', 'scsi-0QEMU_QEMU_HARDDISK_cc9ae976-88cb-4b21-9449-d8985ff12d4f', 'scsi-SQEMU_QEMU_HARDDISK_cc9ae976-88cb-4b21-9449-d8985ff12d4f'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:17:37.430626 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_acc40fa0-2709-4f7e-bb91-7c7e8e422ea3', 'scsi-SQEMU_QEMU_HARDDISK_acc40fa0-2709-4f7e-bb91-7c7e8e422ea3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_acc40fa0-2709-4f7e-bb91-7c7e8e422ea3-part1', 'scsi-SQEMU_QEMU_HARDDISK_acc40fa0-2709-4f7e-bb91-7c7e8e422ea3-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_acc40fa0-2709-4f7e-bb91-7c7e8e422ea3-part14', 'scsi-SQEMU_QEMU_HARDDISK_acc40fa0-2709-4f7e-bb91-7c7e8e422ea3-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_acc40fa0-2709-4f7e-bb91-7c7e8e422ea3-part15', 'scsi-SQEMU_QEMU_HARDDISK_acc40fa0-2709-4f7e-bb91-7c7e8e422ea3-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_acc40fa0-2709-4f7e-bb91-7c7e8e422ea3-part16', 'scsi-SQEMU_QEMU_HARDDISK_acc40fa0-2709-4f7e-bb91-7c7e8e422ea3-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:17:37.430640 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-04-17-24-59-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:17:37.430653 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | 
bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--38a85088--e19d--56c7--801b--f45e1c084bd2-osd--block--38a85088--e19d--56c7--801b--f45e1c084bd2'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-YiSio8-0igJ-vSex-UFpF-3jw7-YaBM-7i3yTR', 'scsi-0QEMU_QEMU_HARDDISK_d957e37b-6f48-487c-9682-d56dbc604f5a', 'scsi-SQEMU_QEMU_HARDDISK_d957e37b-6f48-487c-9682-d56dbc604f5a'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:17:37.430662 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:17:37.430671 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_36831ba3-00a3-40d1-8c8d-d5688ce5b92e', 'scsi-SQEMU_QEMU_HARDDISK_36831ba3-00a3-40d1-8c8d-d5688ce5b92e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:17:37.430685 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:17:37.430697 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:17:37.430706 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:17:37.430719 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:17:37.430728 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-04-17-24-57-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:17:37.430736 | orchestrator | skipping: [testbed-node-1] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:17:37.430751 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:17:37.430759 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:17:37.430772 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eaff6497-f877-4154-a511-2c6d9abffd21', 'scsi-SQEMU_QEMU_HARDDISK_eaff6497-f877-4154-a511-2c6d9abffd21'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eaff6497-f877-4154-a511-2c6d9abffd21-part1', 'scsi-SQEMU_QEMU_HARDDISK_eaff6497-f877-4154-a511-2c6d9abffd21-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eaff6497-f877-4154-a511-2c6d9abffd21-part14', 'scsi-SQEMU_QEMU_HARDDISK_eaff6497-f877-4154-a511-2c6d9abffd21-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eaff6497-f877-4154-a511-2c6d9abffd21-part15', 'scsi-SQEMU_QEMU_HARDDISK_eaff6497-f877-4154-a511-2c6d9abffd21-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eaff6497-f877-4154-a511-2c6d9abffd21-part16', 'scsi-SQEMU_QEMU_HARDDISK_eaff6497-f877-4154-a511-2c6d9abffd21-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': 
None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:17:37.430786 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-04-17-25-03-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:17:37.430822 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:17:37.430841 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:17:37.430849 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:17:37.430864 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:17:37.430877 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:17:37.430904 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:17:37.430912 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:17:37.430921 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:17:37.430929 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:17:37.430943 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:17:37.430952 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in 
groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:17:37.430965 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fb812322-6028-4c88-9c9a-0c04dc1dfbca', 'scsi-SQEMU_QEMU_HARDDISK_fb812322-6028-4c88-9c9a-0c04dc1dfbca'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fb812322-6028-4c88-9c9a-0c04dc1dfbca-part1', 'scsi-SQEMU_QEMU_HARDDISK_fb812322-6028-4c88-9c9a-0c04dc1dfbca-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fb812322-6028-4c88-9c9a-0c04dc1dfbca-part14', 'scsi-SQEMU_QEMU_HARDDISK_fb812322-6028-4c88-9c9a-0c04dc1dfbca-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fb812322-6028-4c88-9c9a-0c04dc1dfbca-part15', 'scsi-SQEMU_QEMU_HARDDISK_fb812322-6028-4c88-9c9a-0c04dc1dfbca-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': 
['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fb812322-6028-4c88-9c9a-0c04dc1dfbca-part16', 'scsi-SQEMU_QEMU_HARDDISK_fb812322-6028-4c88-9c9a-0c04dc1dfbca-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:17:37.430981 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-04-17-25-04-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:17:37.430989 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:17:37.430997 | orchestrator | 2025-07-04 18:17:37.431005 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2025-07-04 18:17:37.431013 | orchestrator | Friday 04 July 2025 18:05:16 +0000 (0:00:01.460) 0:00:37.403 *********** 2025-07-04 18:17:37.431026 | orchestrator | ok: 
[testbed-node-3] 2025-07-04 18:17:37.431035 | orchestrator | ok: [testbed-node-4] 2025-07-04 18:17:37.431042 | orchestrator | ok: [testbed-node-5] 2025-07-04 18:17:37.431050 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:17:37.431058 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:17:37.431065 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:17:37.431073 | orchestrator | 2025-07-04 18:17:37.431081 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2025-07-04 18:17:37.431089 | orchestrator | Friday 04 July 2025 18:05:18 +0000 (0:00:01.855) 0:00:39.258 *********** 2025-07-04 18:17:37.431097 | orchestrator | ok: [testbed-node-3] 2025-07-04 18:17:37.431110 | orchestrator | ok: [testbed-node-4] 2025-07-04 18:17:37.431118 | orchestrator | ok: [testbed-node-5] 2025-07-04 18:17:37.431125 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:17:37.431133 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:17:37.431140 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:17:37.431148 | orchestrator | 2025-07-04 18:17:37.431171 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-07-04 18:17:37.431180 | orchestrator | Friday 04 July 2025 18:05:18 +0000 (0:00:00.694) 0:00:39.953 *********** 2025-07-04 18:17:37.431188 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:17:37.431196 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:17:37.431204 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:17:37.431212 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:17:37.431229 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:17:37.431241 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:17:37.431249 | orchestrator | 2025-07-04 18:17:37.431257 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-07-04 18:17:37.431265 | orchestrator | Friday 04 July 2025 18:05:19 +0000 (0:00:00.911) 
0:00:40.865 *********** 2025-07-04 18:17:37.431273 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:17:37.431281 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:17:37.431288 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:17:37.431296 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:17:37.431304 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:17:37.431311 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:17:37.431319 | orchestrator | 2025-07-04 18:17:37.431327 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-07-04 18:17:37.431335 | orchestrator | Friday 04 July 2025 18:05:20 +0000 (0:00:01.118) 0:00:41.983 *********** 2025-07-04 18:17:37.431343 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:17:37.431350 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:17:37.431359 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:17:37.431366 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:17:37.431374 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:17:37.431382 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:17:37.431390 | orchestrator | 2025-07-04 18:17:37.431397 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-07-04 18:17:37.431405 | orchestrator | Friday 04 July 2025 18:05:22 +0000 (0:00:01.108) 0:00:43.092 *********** 2025-07-04 18:17:37.431413 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:17:37.431421 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:17:37.431429 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:17:37.431436 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:17:37.431444 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:17:37.431452 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:17:37.431459 | orchestrator | 2025-07-04 18:17:37.431467 | orchestrator | TASK [ceph-facts : Set_fact 
_monitor_addresses - ipv4] ************************* 2025-07-04 18:17:37.431484 | orchestrator | Friday 04 July 2025 18:05:22 +0000 (0:00:00.724) 0:00:43.817 *********** 2025-07-04 18:17:37.431492 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-07-04 18:17:37.431500 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-07-04 18:17:37.431508 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-07-04 18:17:37.431516 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-07-04 18:17:37.431523 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-07-04 18:17:37.431531 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-07-04 18:17:37.431539 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-07-04 18:17:37.431547 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-07-04 18:17:37.431554 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-07-04 18:17:37.431562 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-07-04 18:17:37.431570 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2025-07-04 18:17:37.431585 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-07-04 18:17:37.431592 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2025-07-04 18:17:37.431600 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-07-04 18:17:37.431608 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2025-07-04 18:17:37.431615 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2025-07-04 18:17:37.431623 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2025-07-04 18:17:37.431631 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2025-07-04 18:17:37.431639 | orchestrator | 2025-07-04 18:17:37.431647 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2025-07-04 18:17:37.431655 | 
orchestrator | Friday 04 July 2025 18:05:25 +0000 (0:00:02.313) 0:00:46.130 *********** 2025-07-04 18:17:37.431662 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-07-04 18:17:37.431670 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-07-04 18:17:37.431678 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-07-04 18:17:37.431686 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:17:37.431694 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-07-04 18:17:37.431702 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-07-04 18:17:37.431710 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-07-04 18:17:37.431717 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-07-04 18:17:37.431725 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-07-04 18:17:37.431733 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:17:37.431746 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-07-04 18:17:37.431754 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-07-04 18:17:37.431762 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-07-04 18:17:37.431770 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-07-04 18:17:37.431778 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:17:37.431786 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-07-04 18:17:37.431794 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-07-04 18:17:37.431801 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-07-04 18:17:37.431809 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:17:37.431817 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:17:37.431825 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  
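The long runs of "skipping" results in this section (the per-device loop earlier, and the IPv6 monitor-address items here) all trace back to Jinja conditionals of the form shown in the log's `false_condition` field, e.g. `inventory_hostname in groups.get(osd_group_name, [])`. A minimal sketch of that membership check, with an illustrative inventory (the group names and hosts below are assumptions for demonstration, not the job's real group layout):

```python
# Sketch of the ceph-ansible skip condition seen in the log:
# a host only processes the block-device loop when it is a member
# of the OSD group. The inventory dict here is illustrative only.

def should_scan_devices(inventory_hostname, groups, osd_group_name="osds"):
    """Mirror of 'inventory_hostname in groups.get(osd_group_name, [])'."""
    return inventory_hostname in groups.get(osd_group_name, [])

# Hypothetical group layout matching the ok/skipping pattern in the log,
# where testbed-node-0..2 are control-plane nodes and 3..5 carry OSDs.
groups = {
    "mons": ["testbed-node-0", "testbed-node-1", "testbed-node-2"],
    "osds": ["testbed-node-3", "testbed-node-4", "testbed-node-5"],
}

print(should_scan_devices("testbed-node-2", groups))  # False -> loop items skipped
print(should_scan_devices("testbed-node-3", groups))  # True  -> devices processed
```

Because the condition is evaluated per loop item, a non-OSD host emits one "skipping" line for every discovered device (loop0..loop7, sda, sr0), which is why the skip output above is so verbose.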
2025-07-04 18:17:37.431832 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-07-04 18:17:37.431840 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-07-04 18:17:37.431848 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:17:37.431855 | orchestrator | 2025-07-04 18:17:37.431863 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2025-07-04 18:17:37.431871 | orchestrator | Friday 04 July 2025 18:05:26 +0000 (0:00:01.028) 0:00:47.158 *********** 2025-07-04 18:17:37.431885 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:17:37.431893 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:17:37.431901 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:17:37.431910 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-04 18:17:37.431918 | orchestrator | 2025-07-04 18:17:37.431926 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-07-04 18:17:37.431934 | orchestrator | Friday 04 July 2025 18:05:27 +0000 (0:00:01.144) 0:00:48.303 *********** 2025-07-04 18:17:37.431941 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:17:37.431949 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:17:37.431957 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:17:37.431965 | orchestrator | 2025-07-04 18:17:37.431979 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-07-04 18:17:37.431987 | orchestrator | Friday 04 July 2025 18:05:27 +0000 (0:00:00.443) 0:00:48.746 *********** 2025-07-04 18:17:37.431994 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:17:37.432002 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:17:37.432010 | orchestrator | skipping: [testbed-node-5] 2025-07-04 
18:17:37.432017 | orchestrator | 2025-07-04 18:17:37.432025 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-07-04 18:17:37.432033 | orchestrator | Friday 04 July 2025 18:05:28 +0000 (0:00:00.638) 0:00:49.385 *********** 2025-07-04 18:17:37.432041 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:17:37.432048 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:17:37.432056 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:17:37.432064 | orchestrator | 2025-07-04 18:17:37.432071 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-07-04 18:17:37.432079 | orchestrator | Friday 04 July 2025 18:05:28 +0000 (0:00:00.457) 0:00:49.843 *********** 2025-07-04 18:17:37.432087 | orchestrator | ok: [testbed-node-3] 2025-07-04 18:17:37.432095 | orchestrator | ok: [testbed-node-4] 2025-07-04 18:17:37.432102 | orchestrator | ok: [testbed-node-5] 2025-07-04 18:17:37.432110 | orchestrator | 2025-07-04 18:17:37.432118 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-07-04 18:17:37.432126 | orchestrator | Friday 04 July 2025 18:05:29 +0000 (0:00:00.629) 0:00:50.473 *********** 2025-07-04 18:17:37.432134 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-07-04 18:17:37.432141 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-07-04 18:17:37.432149 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-07-04 18:17:37.432173 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:17:37.432181 | orchestrator | 2025-07-04 18:17:37.432189 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-07-04 18:17:37.432197 | orchestrator | Friday 04 July 2025 18:05:29 +0000 (0:00:00.352) 0:00:50.825 *********** 2025-07-04 18:17:37.432205 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  
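The four `_radosgw_address` tasks above walk a fallback chain: try `radosgw_address_block` (IPv4, then IPv6), then an explicitly configured `radosgw_address` (the branch that returns `ok` in this run), and only then derive the address from `radosgw_interface`. A simplified sketch of that selection order, assuming hypothetical host variables (the variable names follow the task titles, but the logic is an illustrative condensation, not ceph-ansible's actual implementation):

```python
# Illustrative reconstruction of the RGW address fallback order implied
# by the ceph-facts task sequence in the log. Addresses use TEST-NET
# values; the 'x.x.x.x' unset sentinel is an assumption.
import ipaddress

def resolve_radosgw_address(host_vars):
    # 1. A CIDR block: pick the host IP that falls inside it.
    block = host_vars.get("radosgw_address_block")
    if block:
        net = ipaddress.ip_network(block)
        for ip in host_vars.get("ansible_all_ip_addresses", []):
            if ipaddress.ip_address(ip) in net:
                return ip
    # 2. An explicitly configured address (the branch taken in this run).
    addr = host_vars.get("radosgw_address")
    if addr and addr != "x.x.x.x":
        return addr
    # 3. Fall back to the IPv4 address of a named interface.
    iface = host_vars.get("radosgw_interface")
    if iface:
        facts = host_vars.get(f"ansible_{iface}", {})
        return facts.get("ipv4", {}).get("address")
    return None

print(resolve_radosgw_address({"radosgw_address": "192.0.2.10"}))  # 192.0.2.10
```

This matches the skip/ok pattern on testbed-node-3..5: the block and interface branches are skipped, and the explicit `radosgw_address` wins.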
2025-07-04 18:17:37.432213 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-07-04 18:17:37.432221 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-07-04 18:17:37.432229 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:17:37.432236 | orchestrator | 2025-07-04 18:17:37.432244 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-07-04 18:17:37.432252 | orchestrator | Friday 04 July 2025 18:05:30 +0000 (0:00:00.365) 0:00:51.190 *********** 2025-07-04 18:17:37.432260 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-07-04 18:17:37.432268 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-07-04 18:17:37.432276 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-07-04 18:17:37.432283 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:17:37.432291 | orchestrator | 2025-07-04 18:17:37.432299 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-07-04 18:17:37.432307 | orchestrator | Friday 04 July 2025 18:05:30 +0000 (0:00:00.496) 0:00:51.687 *********** 2025-07-04 18:17:37.432314 | orchestrator | ok: [testbed-node-3] 2025-07-04 18:17:37.432322 | orchestrator | ok: [testbed-node-4] 2025-07-04 18:17:37.432330 | orchestrator | ok: [testbed-node-5] 2025-07-04 18:17:37.432337 | orchestrator | 2025-07-04 18:17:37.432345 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-07-04 18:17:37.432353 | orchestrator | Friday 04 July 2025 18:05:31 +0000 (0:00:00.513) 0:00:52.200 *********** 2025-07-04 18:17:37.432361 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-07-04 18:17:37.432368 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-07-04 18:17:37.432376 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-07-04 18:17:37.432384 | orchestrator | 2025-07-04 18:17:37.432396 | orchestrator | 
TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2025-07-04 18:17:37.432410 | orchestrator | Friday 04 July 2025 18:05:31 +0000 (0:00:00.660) 0:00:52.860 *********** 2025-07-04 18:17:37.432418 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-07-04 18:17:37.432426 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-07-04 18:17:37.432433 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-07-04 18:17:37.432441 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-07-04 18:17:37.432449 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-07-04 18:17:37.432456 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-07-04 18:17:37.432464 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-07-04 18:17:37.432472 | orchestrator | 2025-07-04 18:17:37.432479 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2025-07-04 18:17:37.432487 | orchestrator | Friday 04 July 2025 18:05:32 +0000 (0:00:00.758) 0:00:53.619 *********** 2025-07-04 18:17:37.432509 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-07-04 18:17:37.432517 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-07-04 18:17:37.432525 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-07-04 18:17:37.432533 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-07-04 18:17:37.432541 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-07-04 18:17:37.432549 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => 
(item=testbed-node-5) 2025-07-04 18:17:37.432557 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-07-04 18:17:37.432573 | orchestrator | 2025-07-04 18:17:37.432582 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-07-04 18:17:37.432589 | orchestrator | Friday 04 July 2025 18:05:34 +0000 (0:00:01.904) 0:00:55.524 *********** 2025-07-04 18:17:37.432597 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-07-04 18:17:37.432605 | orchestrator | 2025-07-04 18:17:37.432613 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-07-04 18:17:37.432621 | orchestrator | Friday 04 July 2025 18:05:35 +0000 (0:00:01.022) 0:00:56.546 *********** 2025-07-04 18:17:37.432629 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-07-04 18:17:37.432637 | orchestrator | 2025-07-04 18:17:37.432653 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-07-04 18:17:37.432661 | orchestrator | Friday 04 July 2025 18:05:36 +0000 (0:00:01.141) 0:00:57.687 *********** 2025-07-04 18:17:37.432669 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:17:37.432677 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:17:37.432684 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:17:37.432692 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:17:37.432700 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:17:37.432708 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:17:37.432715 | orchestrator | 2025-07-04 18:17:37.432723 | orchestrator | TASK [ceph-handler : Check for an osd container] 
******************************* 2025-07-04 18:17:37.432730 | orchestrator | Friday 04 July 2025 18:05:37 +0000 (0:00:01.318) 0:00:59.006 *********** 2025-07-04 18:17:37.432746 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:17:37.432754 | orchestrator | ok: [testbed-node-3] 2025-07-04 18:17:37.432762 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:17:37.432770 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:17:37.432778 | orchestrator | ok: [testbed-node-4] 2025-07-04 18:17:37.432795 | orchestrator | ok: [testbed-node-5] 2025-07-04 18:17:37.432803 | orchestrator | 2025-07-04 18:17:37.432810 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-07-04 18:17:37.432818 | orchestrator | Friday 04 July 2025 18:05:39 +0000 (0:00:01.148) 0:01:00.154 *********** 2025-07-04 18:17:37.432826 | orchestrator | ok: [testbed-node-3] 2025-07-04 18:17:37.432834 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:17:37.432841 | orchestrator | ok: [testbed-node-4] 2025-07-04 18:17:37.432849 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:17:37.432857 | orchestrator | ok: [testbed-node-5] 2025-07-04 18:17:37.432865 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:17:37.432872 | orchestrator | 2025-07-04 18:17:37.432880 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-07-04 18:17:37.432888 | orchestrator | Friday 04 July 2025 18:05:40 +0000 (0:00:01.251) 0:01:01.406 *********** 2025-07-04 18:17:37.432896 | orchestrator | ok: [testbed-node-3] 2025-07-04 18:17:37.432903 | orchestrator | ok: [testbed-node-4] 2025-07-04 18:17:37.432911 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:17:37.432919 | orchestrator | ok: [testbed-node-5] 2025-07-04 18:17:37.432927 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:17:37.432935 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:17:37.432942 | orchestrator | 2025-07-04 
18:17:37.432950 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-07-04 18:17:37.432958 | orchestrator | Friday 04 July 2025 18:05:41 +0000 (0:00:01.568) 0:01:02.974 *********** 2025-07-04 18:17:37.432966 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:17:37.432974 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:17:37.432985 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:17:37.432998 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:17:37.433012 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:17:37.433026 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:17:37.433039 | orchestrator | 2025-07-04 18:17:37.433053 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-07-04 18:17:37.433072 | orchestrator | Friday 04 July 2025 18:05:44 +0000 (0:00:02.116) 0:01:05.091 *********** 2025-07-04 18:17:37.433084 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:17:37.433097 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:17:37.433111 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:17:37.433126 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:17:37.433136 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:17:37.433144 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:17:37.433152 | orchestrator | 2025-07-04 18:17:37.433210 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-07-04 18:17:37.433220 | orchestrator | Friday 04 July 2025 18:05:45 +0000 (0:00:00.977) 0:01:06.068 *********** 2025-07-04 18:17:37.433228 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:17:37.433236 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:17:37.433243 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:17:37.433251 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:17:37.433258 | orchestrator | skipping: [testbed-node-1] 
2025-07-04 18:17:37.433266 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:17:37.433274 | orchestrator | 2025-07-04 18:17:37.433282 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-07-04 18:17:37.433289 | orchestrator | Friday 04 July 2025 18:05:46 +0000 (0:00:01.004) 0:01:07.073 *********** 2025-07-04 18:17:37.433297 | orchestrator | ok: [testbed-node-3] 2025-07-04 18:17:37.433310 | orchestrator | ok: [testbed-node-4] 2025-07-04 18:17:37.433318 | orchestrator | ok: [testbed-node-5] 2025-07-04 18:17:37.433326 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:17:37.433334 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:17:37.433341 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:17:37.433360 | orchestrator | 2025-07-04 18:17:37.433368 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-07-04 18:17:37.433376 | orchestrator | Friday 04 July 2025 18:05:47 +0000 (0:00:01.275) 0:01:08.348 *********** 2025-07-04 18:17:37.433393 | orchestrator | ok: [testbed-node-3] 2025-07-04 18:17:37.433401 | orchestrator | ok: [testbed-node-4] 2025-07-04 18:17:37.433409 | orchestrator | ok: [testbed-node-5] 2025-07-04 18:17:37.433416 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:17:37.433424 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:17:37.433432 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:17:37.433439 | orchestrator | 2025-07-04 18:17:37.433447 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-07-04 18:17:37.433455 | orchestrator | Friday 04 July 2025 18:05:48 +0000 (0:00:01.548) 0:01:09.897 *********** 2025-07-04 18:17:37.433463 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:17:37.433471 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:17:37.433479 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:17:37.433486 | orchestrator | skipping: [testbed-node-0] 
2025-07-04 18:17:37.433494 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:17:37.433502 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:17:37.433508 | orchestrator | 2025-07-04 18:17:37.433515 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-07-04 18:17:37.433522 | orchestrator | Friday 04 July 2025 18:05:49 +0000 (0:00:00.889) 0:01:10.787 *********** 2025-07-04 18:17:37.433528 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:17:37.433535 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:17:37.433542 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:17:37.433548 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:17:37.433555 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:17:37.433569 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:17:37.433576 | orchestrator | 2025-07-04 18:17:37.433583 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-07-04 18:17:37.433590 | orchestrator | Friday 04 July 2025 18:05:50 +0000 (0:00:00.905) 0:01:11.692 *********** 2025-07-04 18:17:37.433596 | orchestrator | ok: [testbed-node-3] 2025-07-04 18:17:37.433603 | orchestrator | ok: [testbed-node-4] 2025-07-04 18:17:37.433609 | orchestrator | ok: [testbed-node-5] 2025-07-04 18:17:37.433616 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:17:37.433623 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:17:37.433629 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:17:37.433636 | orchestrator | 2025-07-04 18:17:37.433643 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-07-04 18:17:37.433649 | orchestrator | Friday 04 July 2025 18:05:51 +0000 (0:00:00.927) 0:01:12.620 *********** 2025-07-04 18:17:37.433656 | orchestrator | ok: [testbed-node-3] 2025-07-04 18:17:37.433662 | orchestrator | ok: [testbed-node-4] 2025-07-04 18:17:37.433669 | orchestrator | ok: 
[testbed-node-5] 2025-07-04 18:17:37.433675 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:17:37.433682 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:17:37.433689 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:17:37.433695 | orchestrator | 2025-07-04 18:17:37.433702 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-07-04 18:17:37.433709 | orchestrator | Friday 04 July 2025 18:05:52 +0000 (0:00:00.712) 0:01:13.333 *********** 2025-07-04 18:17:37.433715 | orchestrator | ok: [testbed-node-3] 2025-07-04 18:17:37.433722 | orchestrator | ok: [testbed-node-4] 2025-07-04 18:17:37.433728 | orchestrator | ok: [testbed-node-5] 2025-07-04 18:17:37.433735 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:17:37.433742 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:17:37.433748 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:17:37.433762 | orchestrator | 2025-07-04 18:17:37.433769 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-07-04 18:17:37.433776 | orchestrator | Friday 04 July 2025 18:05:52 +0000 (0:00:00.593) 0:01:13.926 *********** 2025-07-04 18:17:37.433782 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:17:37.433789 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:17:37.433795 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:17:37.433802 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:17:37.433813 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:17:37.433820 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:17:37.433826 | orchestrator | 2025-07-04 18:17:37.433833 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-07-04 18:17:37.433840 | orchestrator | Friday 04 July 2025 18:05:53 +0000 (0:00:00.696) 0:01:14.623 *********** 2025-07-04 18:17:37.433846 | orchestrator | skipping: [testbed-node-3] 
2025-07-04 18:17:37.433853 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:17:37.433859 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:17:37.433866 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:17:37.433873 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:17:37.433879 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:17:37.433886 | orchestrator | 2025-07-04 18:17:37.433898 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-07-04 18:17:37.433905 | orchestrator | Friday 04 July 2025 18:05:54 +0000 (0:00:00.571) 0:01:15.195 *********** 2025-07-04 18:17:37.433912 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:17:37.433918 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:17:37.433925 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:17:37.433931 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:17:37.433938 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:17:37.433944 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:17:37.433951 | orchestrator | 2025-07-04 18:17:37.433957 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-07-04 18:17:37.433964 | orchestrator | Friday 04 July 2025 18:05:54 +0000 (0:00:00.695) 0:01:15.890 *********** 2025-07-04 18:17:37.433970 | orchestrator | ok: [testbed-node-3] 2025-07-04 18:17:37.433977 | orchestrator | ok: [testbed-node-4] 2025-07-04 18:17:37.433983 | orchestrator | ok: [testbed-node-5] 2025-07-04 18:17:37.433990 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:17:37.433996 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:17:37.434003 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:17:37.434009 | orchestrator | 2025-07-04 18:17:37.434230 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-07-04 18:17:37.434253 | orchestrator | Friday 04 July 2025 18:05:55 +0000 (0:00:00.635) 0:01:16.526 
*********** 2025-07-04 18:17:37.434264 | orchestrator | ok: [testbed-node-3] 2025-07-04 18:17:37.434282 | orchestrator | ok: [testbed-node-4] 2025-07-04 18:17:37.434294 | orchestrator | ok: [testbed-node-5] 2025-07-04 18:17:37.434304 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:17:37.434316 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:17:37.434324 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:17:37.434331 | orchestrator | 2025-07-04 18:17:37.434338 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2025-07-04 18:17:37.434345 | orchestrator | Friday 04 July 2025 18:05:56 +0000 (0:00:01.293) 0:01:17.819 *********** 2025-07-04 18:17:37.434352 | orchestrator | changed: [testbed-node-3] 2025-07-04 18:17:37.434358 | orchestrator | changed: [testbed-node-4] 2025-07-04 18:17:37.434365 | orchestrator | changed: [testbed-node-5] 2025-07-04 18:17:37.434371 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:17:37.434378 | orchestrator | changed: [testbed-node-1] 2025-07-04 18:17:37.434385 | orchestrator | changed: [testbed-node-2] 2025-07-04 18:17:37.434391 | orchestrator | 2025-07-04 18:17:37.434398 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2025-07-04 18:17:37.434404 | orchestrator | Friday 04 July 2025 18:05:58 +0000 (0:00:01.695) 0:01:19.514 *********** 2025-07-04 18:17:37.434411 | orchestrator | changed: [testbed-node-1] 2025-07-04 18:17:37.434418 | orchestrator | changed: [testbed-node-3] 2025-07-04 18:17:37.434424 | orchestrator | changed: [testbed-node-5] 2025-07-04 18:17:37.434431 | orchestrator | changed: [testbed-node-2] 2025-07-04 18:17:37.434437 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:17:37.434444 | orchestrator | changed: [testbed-node-4] 2025-07-04 18:17:37.434450 | orchestrator | 2025-07-04 18:17:37.434457 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2025-07-04 
18:17:37.434482 | orchestrator | Friday 04 July 2025 18:06:00 +0000 (0:00:02.279) 0:01:21.794 *********** 2025-07-04 18:17:37.434490 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-07-04 18:17:37.434497 | orchestrator | 2025-07-04 18:17:37.434504 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2025-07-04 18:17:37.434511 | orchestrator | Friday 04 July 2025 18:06:02 +0000 (0:00:01.344) 0:01:23.139 *********** 2025-07-04 18:17:37.434517 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:17:37.434524 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:17:37.434530 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:17:37.434536 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:17:37.434543 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:17:37.434549 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:17:37.434556 | orchestrator | 2025-07-04 18:17:37.434570 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2025-07-04 18:17:37.434577 | orchestrator | Friday 04 July 2025 18:06:02 +0000 (0:00:00.652) 0:01:23.791 *********** 2025-07-04 18:17:37.434584 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:17:37.434590 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:17:37.434597 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:17:37.434603 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:17:37.434610 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:17:37.434624 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:17:37.434631 | orchestrator | 2025-07-04 18:17:37.434645 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2025-07-04 18:17:37.434651 | orchestrator | Friday 04 July 2025 18:06:03 +0000 
(0:00:00.500) 0:01:24.292 *********** 2025-07-04 18:17:37.434658 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-07-04 18:17:37.434664 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-07-04 18:17:37.434671 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-07-04 18:17:37.434677 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-07-04 18:17:37.434684 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-07-04 18:17:37.434690 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-07-04 18:17:37.434697 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-07-04 18:17:37.434703 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-07-04 18:17:37.434710 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-07-04 18:17:37.434716 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-07-04 18:17:37.434723 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-07-04 18:17:37.434764 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-07-04 18:17:37.434772 | orchestrator | 2025-07-04 18:17:37.434779 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2025-07-04 18:17:37.434785 | orchestrator | Friday 04 July 2025 18:06:04 +0000 (0:00:01.375) 0:01:25.668 *********** 2025-07-04 18:17:37.434792 | orchestrator | changed: [testbed-node-5] 2025-07-04 18:17:37.434799 | orchestrator | changed: [testbed-node-3] 2025-07-04 18:17:37.434806 | orchestrator | changed: [testbed-node-4] 2025-07-04 
18:17:37.434821 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:17:37.434829 | orchestrator | changed: [testbed-node-1] 2025-07-04 18:17:37.434836 | orchestrator | changed: [testbed-node-2] 2025-07-04 18:17:37.434844 | orchestrator | 2025-07-04 18:17:37.434851 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2025-07-04 18:17:37.434864 | orchestrator | Friday 04 July 2025 18:06:05 +0000 (0:00:00.838) 0:01:26.506 *********** 2025-07-04 18:17:37.434871 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:17:37.434879 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:17:37.434894 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:17:37.434902 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:17:37.434910 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:17:37.434917 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:17:37.434932 | orchestrator | 2025-07-04 18:17:37.434943 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2025-07-04 18:17:37.434951 | orchestrator | Friday 04 July 2025 18:06:06 +0000 (0:00:00.702) 0:01:27.209 *********** 2025-07-04 18:17:37.434959 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:17:37.434966 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:17:37.434973 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:17:37.434981 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:17:37.434988 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:17:37.434995 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:17:37.435003 | orchestrator | 2025-07-04 18:17:37.435011 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2025-07-04 18:17:37.435018 | orchestrator | Friday 04 July 2025 18:06:06 +0000 (0:00:00.529) 0:01:27.739 *********** 2025-07-04 18:17:37.435026 | orchestrator | skipping: [testbed-node-3] 2025-07-04 
18:17:37.435033 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:17:37.435048 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:17:37.435056 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:17:37.435063 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:17:37.435071 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:17:37.435078 | orchestrator | 2025-07-04 18:17:37.435086 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2025-07-04 18:17:37.435092 | orchestrator | Friday 04 July 2025 18:06:07 +0000 (0:00:00.652) 0:01:28.391 *********** 2025-07-04 18:17:37.435106 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-07-04 18:17:37.435113 | orchestrator | 2025-07-04 18:17:37.435120 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2025-07-04 18:17:37.435126 | orchestrator | Friday 04 July 2025 18:06:08 +0000 (0:00:01.030) 0:01:29.422 *********** 2025-07-04 18:17:37.435133 | orchestrator | 2025-07-04 18:17:37.435139 | orchestrator | STILL ALIVE [task 'ceph-container-common : Pulling Ceph container image' is running] *** 2025-07-04 18:17:37.435146 | orchestrator | ok: [testbed-node-3] 2025-07-04 18:17:37.435152 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:17:37.435175 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:17:37.435182 | orchestrator | ok: [testbed-node-5] 2025-07-04 18:17:37.435188 | orchestrator | ok: [testbed-node-4] 2025-07-04 18:17:37.435195 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:17:37.435201 | orchestrator | 2025-07-04 18:17:37.435208 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2025-07-04 18:17:37.435215 | orchestrator | Friday 04 July 2025 18:08:45 +0000 (0:02:36.900) 0:04:06.323 *********** 
2025-07-04 18:17:37.435222 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-07-04 18:17:37.435228 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2025-07-04 18:17:37.435235 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2025-07-04 18:17:37.435241 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:17:37.435248 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-07-04 18:17:37.435255 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2025-07-04 18:17:37.435261 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2025-07-04 18:17:37.435268 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:17:37.435279 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-07-04 18:17:37.435286 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2025-07-04 18:17:37.435292 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2025-07-04 18:17:37.435299 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:17:37.435305 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-07-04 18:17:37.435312 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2025-07-04 18:17:37.435318 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2025-07-04 18:17:37.435325 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:17:37.435332 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-07-04 18:17:37.435338 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2025-07-04 18:17:37.435345 | orchestrator | skipping: [testbed-node-1] => 
(item=docker.io/grafana/grafana:6.7.4)  2025-07-04 18:17:37.435351 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:17:37.435380 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-07-04 18:17:37.435388 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2025-07-04 18:17:37.435395 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2025-07-04 18:17:37.435401 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:17:37.435408 | orchestrator | 2025-07-04 18:17:37.435414 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2025-07-04 18:17:37.435421 | orchestrator | Friday 04 July 2025 18:08:46 +0000 (0:00:00.938) 0:04:07.261 *********** 2025-07-04 18:17:37.435427 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:17:37.435434 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:17:37.435440 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:17:37.435446 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:17:37.435453 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:17:37.435459 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:17:37.435466 | orchestrator | 2025-07-04 18:17:37.435472 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2025-07-04 18:17:37.435479 | orchestrator | Friday 04 July 2025 18:08:46 +0000 (0:00:00.619) 0:04:07.880 *********** 2025-07-04 18:17:37.435494 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:17:37.435501 | orchestrator | 2025-07-04 18:17:37.435507 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2025-07-04 18:17:37.435517 | orchestrator | Friday 04 July 2025 18:08:46 +0000 (0:00:00.133) 0:04:08.014 *********** 2025-07-04 18:17:37.435524 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:17:37.435531 | orchestrator | 
skipping: [testbed-node-4] 2025-07-04 18:17:37.435537 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:17:37.435544 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:17:37.435550 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:17:37.435556 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:17:37.435563 | orchestrator | 2025-07-04 18:17:37.435569 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2025-07-04 18:17:37.435576 | orchestrator | Friday 04 July 2025 18:08:47 +0000 (0:00:00.702) 0:04:08.717 *********** 2025-07-04 18:17:37.435583 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:17:37.435589 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:17:37.435596 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:17:37.435602 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:17:37.435608 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:17:37.435615 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:17:37.435621 | orchestrator | 2025-07-04 18:17:37.435628 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2025-07-04 18:17:37.435635 | orchestrator | Friday 04 July 2025 18:08:48 +0000 (0:00:00.631) 0:04:09.348 *********** 2025-07-04 18:17:37.435645 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:17:37.435652 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:17:37.435659 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:17:37.435665 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:17:37.435671 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:17:37.435678 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:17:37.435684 | orchestrator | 2025-07-04 18:17:37.435691 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2025-07-04 18:17:37.435697 | orchestrator | Friday 04 July 2025 18:08:49 +0000 (0:00:00.714) 
0:04:10.062 *********** 2025-07-04 18:17:37.435704 | orchestrator | ok: [testbed-node-3] 2025-07-04 18:17:37.435711 | orchestrator | ok: [testbed-node-5] 2025-07-04 18:17:37.435717 | orchestrator | ok: [testbed-node-4] 2025-07-04 18:17:37.435723 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:17:37.435730 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:17:37.435737 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:17:37.435743 | orchestrator | 2025-07-04 18:17:37.435749 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2025-07-04 18:17:37.435756 | orchestrator | Friday 04 July 2025 18:08:51 +0000 (0:00:02.107) 0:04:12.170 *********** 2025-07-04 18:17:37.435763 | orchestrator | ok: [testbed-node-3] 2025-07-04 18:17:37.435769 | orchestrator | ok: [testbed-node-4] 2025-07-04 18:17:37.435776 | orchestrator | ok: [testbed-node-5] 2025-07-04 18:17:37.435782 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:17:37.435788 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:17:37.435795 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:17:37.435801 | orchestrator | 2025-07-04 18:17:37.435808 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2025-07-04 18:17:37.435814 | orchestrator | Friday 04 July 2025 18:08:51 +0000 (0:00:00.737) 0:04:12.908 *********** 2025-07-04 18:17:37.435821 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-07-04 18:17:37.435829 | orchestrator | 2025-07-04 18:17:37.435835 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2025-07-04 18:17:37.435842 | orchestrator | Friday 04 July 2025 18:08:52 +0000 (0:00:01.053) 0:04:13.962 *********** 2025-07-04 18:17:37.435849 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:17:37.435855 | orchestrator | skipping: 
[testbed-node-4] 2025-07-04 18:17:37.435862 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:17:37.435868 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:17:37.435875 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:17:37.435881 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:17:37.435888 | orchestrator | 2025-07-04 18:17:37.435894 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2025-07-04 18:17:37.435901 | orchestrator | Friday 04 July 2025 18:08:53 +0000 (0:00:00.663) 0:04:14.626 *********** 2025-07-04 18:17:37.435907 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:17:37.435914 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:17:37.435920 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:17:37.435927 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:17:37.435934 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:17:37.435940 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:17:37.435947 | orchestrator | 2025-07-04 18:17:37.435953 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2025-07-04 18:17:37.435960 | orchestrator | Friday 04 July 2025 18:08:54 +0000 (0:00:00.914) 0:04:15.540 *********** 2025-07-04 18:17:37.435966 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:17:37.435973 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:17:37.435979 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:17:37.436007 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:17:37.436014 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:17:37.436021 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:17:37.436027 | orchestrator | 2025-07-04 18:17:37.436043 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2025-07-04 18:17:37.436049 | orchestrator | Friday 04 July 2025 18:08:55 +0000 (0:00:00.742) 0:04:16.282 
*********** 2025-07-04 18:17:37.436056 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:17:37.436063 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:17:37.436069 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:17:37.436075 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:17:37.436082 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:17:37.436088 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:17:37.436095 | orchestrator | 2025-07-04 18:17:37.436102 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2025-07-04 18:17:37.436108 | orchestrator | Friday 04 July 2025 18:08:56 +0000 (0:00:01.007) 0:04:17.290 *********** 2025-07-04 18:17:37.436114 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:17:37.436121 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:17:37.436128 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:17:37.436134 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:17:37.436140 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:17:37.436147 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:17:37.436153 | orchestrator | 2025-07-04 18:17:37.436204 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2025-07-04 18:17:37.436212 | orchestrator | Friday 04 July 2025 18:08:57 +0000 (0:00:00.816) 0:04:18.107 *********** 2025-07-04 18:17:37.436218 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:17:37.436225 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:17:37.436232 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:17:37.436238 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:17:37.436245 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:17:37.436251 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:17:37.436258 | orchestrator | 2025-07-04 18:17:37.436275 | orchestrator | TASK [ceph-container-common : Set_fact 
ceph_release pacific] ******************* 2025-07-04 18:17:37.436282 | orchestrator | Friday 04 July 2025 18:08:58 +0000 (0:00:01.042) 0:04:19.149 *********** 2025-07-04 18:17:37.436289 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:17:37.436303 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:17:37.436310 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:17:37.436316 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:17:37.436323 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:17:37.436329 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:17:37.436336 | orchestrator | 2025-07-04 18:17:37.436342 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2025-07-04 18:17:37.436349 | orchestrator | Friday 04 July 2025 18:08:58 +0000 (0:00:00.643) 0:04:19.793 *********** 2025-07-04 18:17:37.436355 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:17:37.436362 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:17:37.436369 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:17:37.436375 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:17:37.436382 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:17:37.436388 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:17:37.436395 | orchestrator | 2025-07-04 18:17:37.436401 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2025-07-04 18:17:37.436408 | orchestrator | Friday 04 July 2025 18:08:59 +0000 (0:00:01.045) 0:04:20.838 *********** 2025-07-04 18:17:37.436414 | orchestrator | ok: [testbed-node-3] 2025-07-04 18:17:37.436421 | orchestrator | ok: [testbed-node-4] 2025-07-04 18:17:37.436427 | orchestrator | ok: [testbed-node-5] 2025-07-04 18:17:37.436434 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:17:37.436440 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:17:37.436447 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:17:37.436453 
| orchestrator | 2025-07-04 18:17:37.436460 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2025-07-04 18:17:37.436467 | orchestrator | Friday 04 July 2025 18:09:01 +0000 (0:00:01.323) 0:04:22.162 *********** 2025-07-04 18:17:37.436478 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-07-04 18:17:37.436485 | orchestrator | 2025-07-04 18:17:37.436491 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2025-07-04 18:17:37.436498 | orchestrator | Friday 04 July 2025 18:09:02 +0000 (0:00:01.467) 0:04:23.629 *********** 2025-07-04 18:17:37.436505 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2025-07-04 18:17:37.436512 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2025-07-04 18:17:37.436518 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2025-07-04 18:17:37.436525 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2025-07-04 18:17:37.436532 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2025-07-04 18:17:37.436538 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2025-07-04 18:17:37.436544 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2025-07-04 18:17:37.436551 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2025-07-04 18:17:37.436558 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2025-07-04 18:17:37.436564 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2025-07-04 18:17:37.436570 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2025-07-04 18:17:37.436577 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2025-07-04 18:17:37.436584 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2025-07-04 
18:17:37.436590 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2025-07-04 18:17:37.436597 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2025-07-04 18:17:37.436603 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2025-07-04 18:17:37.436610 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2025-07-04 18:17:37.436616 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2025-07-04 18:17:37.436623 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2025-07-04 18:17:37.436662 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2025-07-04 18:17:37.436670 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2025-07-04 18:17:37.436677 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2025-07-04 18:17:37.436683 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2025-07-04 18:17:37.436690 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2025-07-04 18:17:37.436696 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2025-07-04 18:17:37.436703 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2025-07-04 18:17:37.436709 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2025-07-04 18:17:37.436716 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2025-07-04 18:17:37.436722 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2025-07-04 18:17:37.436729 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2025-07-04 18:17:37.436734 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2025-07-04 18:17:37.436740 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2025-07-04 18:17:37.436747 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash) 2025-07-04 18:17:37.436756 | orchestrator | changed: 
[testbed-node-2] => (item=/var/lib/ceph/mds) 2025-07-04 18:17:37.436762 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2025-07-04 18:17:37.436768 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash) 2025-07-04 18:17:37.436775 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2025-07-04 18:17:37.436780 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash) 2025-07-04 18:17:37.436787 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2025-07-04 18:17:37.436797 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2025-07-04 18:17:37.436803 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2025-07-04 18:17:37.436809 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash) 2025-07-04 18:17:37.436815 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash) 2025-07-04 18:17:37.436821 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2025-07-04 18:17:37.436827 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2025-07-04 18:17:37.436833 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash) 2025-07-04 18:17:37.436839 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2025-07-04 18:17:37.436845 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2025-07-04 18:17:37.436851 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2025-07-04 18:17:37.436857 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2025-07-04 18:17:37.436863 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2025-07-04 18:17:37.436869 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2025-07-04 18:17:37.436875 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 
2025-07-04 18:17:37.436881 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2025-07-04 18:17:37.436887 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2025-07-04 18:17:37.436893 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2025-07-04 18:17:37.436899 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2025-07-04 18:17:37.436905 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2025-07-04 18:17:37.436912 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2025-07-04 18:17:37.436918 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2025-07-04 18:17:37.436924 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2025-07-04 18:17:37.436930 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2025-07-04 18:17:37.436936 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2025-07-04 18:17:37.436942 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2025-07-04 18:17:37.436948 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2025-07-04 18:17:37.436954 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2025-07-04 18:17:37.436960 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2025-07-04 18:17:37.436966 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2025-07-04 18:17:37.436972 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2025-07-04 18:17:37.436978 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-07-04 18:17:37.436984 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2025-07-04 18:17:37.436990 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2025-07-04 18:17:37.436996 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2025-07-04 18:17:37.437002 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2025-07-04 18:17:37.437008 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2025-07-04 18:17:37.437014 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph)
2025-07-04 18:17:37.437020 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2025-07-04 18:17:37.437044 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-07-04 18:17:37.437051 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2025-07-04 18:17:37.437062 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-07-04 18:17:37.437068 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2025-07-04 18:17:37.437074 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph)
2025-07-04 18:17:37.437080 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-07-04 18:17:37.437086 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph)
2025-07-04 18:17:37.437092 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-07-04 18:17:37.437098 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph)
2025-07-04 18:17:37.437104 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2025-07-04 18:17:37.437110 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph)
2025-07-04 18:17:37.437116 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph)
2025-07-04 18:17:37.437122 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph)
2025-07-04 18:17:37.437131 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph)
2025-07-04 18:17:37.437137 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-07-04 18:17:37.437143 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph)
2025-07-04 18:17:37.437149 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph)
2025-07-04 18:17:37.437176 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph)
2025-07-04 18:17:37.437182 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph)
2025-07-04 18:17:37.437188 | orchestrator |
2025-07-04 18:17:37.437194 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2025-07-04 18:17:37.437200 | orchestrator | Friday 04 July 2025 18:09:09 +0000 (0:00:07.057) 0:04:30.687 ***********
2025-07-04 18:17:37.437207 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:17:37.437213 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:17:37.437219 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:17:37.437225 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-07-04 18:17:37.437231 | orchestrator |
2025-07-04 18:17:37.437237 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2025-07-04 18:17:37.437243 | orchestrator | Friday 04 July 2025 18:09:11 +0000 (0:00:01.408) 0:04:32.095 ***********
2025-07-04 18:17:37.437250 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-07-04 18:17:37.437256 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-07-04 18:17:37.437263 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-07-04 18:17:37.437269 | orchestrator |
2025-07-04 18:17:37.437275 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2025-07-04 18:17:37.437281 | orchestrator | Friday 04 July 2025 18:09:11 +0000 (0:00:00.808) 0:04:32.903 ***********
2025-07-04 18:17:37.437288 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-07-04 18:17:37.437294 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-07-04 18:17:37.437300 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-07-04 18:17:37.437306 | orchestrator |
2025-07-04 18:17:37.437312 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2025-07-04 18:17:37.437318 | orchestrator | Friday 04 July 2025 18:09:13 +0000 (0:00:01.699) 0:04:34.603 ***********
2025-07-04 18:17:37.437324 | orchestrator | ok: [testbed-node-3]
2025-07-04 18:17:37.437330 | orchestrator | ok: [testbed-node-4]
2025-07-04 18:17:37.437341 | orchestrator | ok: [testbed-node-5]
2025-07-04 18:17:37.437347 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:17:37.437354 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:17:37.437360 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:17:37.437366 | orchestrator |
2025-07-04 18:17:37.437372 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2025-07-04 18:17:37.437378 | orchestrator | Friday 04 July 2025 18:09:14 +0000 (0:00:00.765) 0:04:35.369 ***********
2025-07-04 18:17:37.437384 | orchestrator | ok: [testbed-node-3]
2025-07-04 18:17:37.437390 | orchestrator | ok: [testbed-node-4]
2025-07-04 18:17:37.437396 | orchestrator | ok: [testbed-node-5]
2025-07-04 18:17:37.437402 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:17:37.437408 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:17:37.437415 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:17:37.437421 | orchestrator |
2025-07-04 18:17:37.437427 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2025-07-04 18:17:37.437433 | orchestrator | Friday 04 July 2025 18:09:15 +0000 (0:00:01.127) 0:04:36.497 ***********
2025-07-04 18:17:37.437439 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:17:37.437445 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:17:37.437451 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:17:37.437457 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:17:37.437463 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:17:37.437469 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:17:37.437475 | orchestrator |
2025-07-04 18:17:37.437482 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2025-07-04 18:17:37.437512 | orchestrator | Friday 04 July 2025 18:09:16 +0000 (0:00:00.741) 0:04:37.238 ***********
2025-07-04 18:17:37.437519 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:17:37.437526 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:17:37.437532 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:17:37.437538 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:17:37.437544 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:17:37.437550 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:17:37.437556 | orchestrator |
2025-07-04 18:17:37.437562 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2025-07-04 18:17:37.437568 | orchestrator | Friday 04 July 2025 18:09:17 +0000 (0:00:00.952) 0:04:38.190 ***********
2025-07-04 18:17:37.437574 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:17:37.437580 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:17:37.437586 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:17:37.437592 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:17:37.437598 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:17:37.437604 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:17:37.437610 | orchestrator |
2025-07-04 18:17:37.437616 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2025-07-04 18:17:37.437622 | orchestrator | Friday 04 July 2025 18:09:17 +0000 (0:00:00.707) 0:04:38.897 ***********
2025-07-04 18:17:37.437632 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:17:37.437638 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:17:37.437644 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:17:37.437650 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:17:37.437656 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:17:37.437662 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:17:37.437668 | orchestrator |
2025-07-04 18:17:37.437674 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2025-07-04 18:17:37.437680 | orchestrator | Friday 04 July 2025 18:09:18 +0000 (0:00:00.938) 0:04:39.835 ***********
2025-07-04 18:17:37.437686 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:17:37.437692 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:17:37.437698 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:17:37.437704 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:17:37.437714 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:17:37.437720 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:17:37.437726 | orchestrator |
2025-07-04 18:17:37.437732 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2025-07-04 18:17:37.437738 | orchestrator | Friday 04 July 2025 18:09:19 +0000 (0:00:00.784) 0:04:40.620 ***********
2025-07-04 18:17:37.437744 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:17:37.437750 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:17:37.437756 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:17:37.437762 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:17:37.437768 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:17:37.437774 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:17:37.437780 | orchestrator |
2025-07-04 18:17:37.437786 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2025-07-04 18:17:37.437792 | orchestrator | Friday 04 July 2025 18:09:20 +0000 (0:00:01.019) 0:04:41.639 ***********
2025-07-04 18:17:37.437798 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:17:37.437804 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:17:37.437810 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:17:37.437816 | orchestrator | ok: [testbed-node-5]
2025-07-04 18:17:37.437822 | orchestrator | ok: [testbed-node-3]
2025-07-04 18:17:37.437828 | orchestrator | ok: [testbed-node-4]
2025-07-04 18:17:37.437834 | orchestrator |
2025-07-04 18:17:37.437840 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2025-07-04 18:17:37.437846 | orchestrator | Friday 04 July 2025 18:09:23 +0000 (0:00:03.017) 0:04:44.656 ***********
2025-07-04 18:17:37.437852 | orchestrator | ok: [testbed-node-3]
2025-07-04 18:17:37.437859 | orchestrator | ok: [testbed-node-4]
2025-07-04 18:17:37.437864 | orchestrator | ok: [testbed-node-5]
2025-07-04 18:17:37.437871 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:17:37.437877 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:17:37.437882 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:17:37.437888 | orchestrator |
2025-07-04 18:17:37.437894 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2025-07-04 18:17:37.437900 | orchestrator | Friday 04 July 2025 18:09:24 +0000 (0:00:01.215) 0:04:45.871 ***********
2025-07-04 18:17:37.437906 | orchestrator | ok: [testbed-node-3]
2025-07-04 18:17:37.437912 | orchestrator | ok: [testbed-node-4]
2025-07-04 18:17:37.437918 | orchestrator | ok: [testbed-node-5]
2025-07-04 18:17:37.437925 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:17:37.437931 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:17:37.437937 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:17:37.437943 | orchestrator |
2025-07-04 18:17:37.437948 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2025-07-04 18:17:37.437954 | orchestrator | Friday 04 July 2025 18:09:25 +0000 (0:00:00.881) 0:04:46.753 ***********
2025-07-04 18:17:37.437960 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:17:37.437966 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:17:37.437972 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:17:37.437978 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:17:37.437984 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:17:37.437990 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:17:37.437996 | orchestrator |
2025-07-04 18:17:37.438002 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2025-07-04 18:17:37.438008 | orchestrator | Friday 04 July 2025 18:09:26 +0000 (0:00:00.796) 0:04:47.549 ***********
2025-07-04 18:17:37.438041 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-07-04 18:17:37.438054 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-07-04 18:17:37.438064 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-07-04 18:17:37.438080 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:17:37.438122 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:17:37.438134 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:17:37.438144 | orchestrator |
2025-07-04 18:17:37.438153 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2025-07-04 18:17:37.438200 | orchestrator | Friday 04 July 2025 18:09:27 +0000 (0:00:00.765) 0:04:48.315 ***********
2025-07-04 18:17:37.438207 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2025-07-04 18:17:37.438216 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2025-07-04 18:17:37.438229 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2025-07-04 18:17:37.438235 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2025-07-04 18:17:37.438242 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:17:37.438248 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2025-07-04 18:17:37.438255 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2025-07-04 18:17:37.438261 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:17:37.438267 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:17:37.438273 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:17:37.438279 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:17:37.438285 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:17:37.438291 | orchestrator |
2025-07-04 18:17:37.438297 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2025-07-04 18:17:37.438303 | orchestrator | Friday 04 July 2025 18:09:28 +0000 (0:00:00.912) 0:04:49.228 ***********
2025-07-04 18:17:37.438309 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:17:37.438315 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:17:37.438321 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:17:37.438327 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:17:37.438333 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:17:37.438339 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:17:37.438345 | orchestrator |
2025-07-04 18:17:37.438351 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2025-07-04 18:17:37.438357 | orchestrator | Friday 04 July 2025 18:09:28 +0000 (0:00:00.688) 0:04:49.917 ***********
2025-07-04 18:17:37.438363 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:17:37.438370 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:17:37.438375 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:17:37.438382 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:17:37.438393 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:17:37.438400 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:17:37.438406 | orchestrator |
2025-07-04 18:17:37.438412 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-07-04 18:17:37.438418 | orchestrator | Friday 04 July 2025 18:09:29 +0000 (0:00:00.703) 0:04:50.620 ***********
2025-07-04 18:17:37.438424 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:17:37.438430 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:17:37.438436 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:17:37.438443 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:17:37.438448 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:17:37.438454 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:17:37.438460 | orchestrator |
2025-07-04 18:17:37.438466 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-07-04 18:17:37.438473 | orchestrator | Friday 04 July 2025 18:09:30 +0000 (0:00:00.589) 0:04:51.210 ***********
2025-07-04 18:17:37.438479 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:17:37.438485 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:17:37.438491 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:17:37.438497 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:17:37.438502 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:17:37.438508 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:17:37.438513 | orchestrator |
2025-07-04 18:17:37.438518 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-07-04 18:17:37.438543 | orchestrator | Friday 04 July 2025 18:09:30 +0000 (0:00:00.741) 0:04:51.952 ***********
2025-07-04 18:17:37.438550 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:17:37.438555 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:17:37.438560 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:17:37.438565 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:17:37.438571 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:17:37.438576 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:17:37.438581 | orchestrator |
2025-07-04 18:17:37.438586 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2025-07-04 18:17:37.438592 | orchestrator | Friday 04 July 2025 18:09:31 +0000 (0:00:00.676) 0:04:52.629 ***********
2025-07-04 18:17:37.438597 | orchestrator | ok: [testbed-node-3]
2025-07-04 18:17:37.438602 | orchestrator | ok: [testbed-node-4]
2025-07-04 18:17:37.438607 | orchestrator | ok: [testbed-node-5]
2025-07-04 18:17:37.438613 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:17:37.438618 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:17:37.438623 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:17:37.438629 | orchestrator |
2025-07-04 18:17:37.438634 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2025-07-04 18:17:37.438639 | orchestrator | Friday 04 July 2025 18:09:32 +0000 (0:00:01.125) 0:04:53.754 ***********
2025-07-04 18:17:37.438645 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-07-04 18:17:37.438654 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-07-04 18:17:37.438659 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-07-04 18:17:37.438665 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:17:37.438670 | orchestrator |
2025-07-04 18:17:37.438675 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-07-04 18:17:37.438680 | orchestrator | Friday 04 July 2025 18:09:33 +0000 (0:00:00.360) 0:04:54.114 ***********
2025-07-04 18:17:37.438685 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-07-04 18:17:37.438691 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-07-04 18:17:37.438696 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-07-04 18:17:37.438701 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:17:37.438706 | orchestrator |
2025-07-04 18:17:37.438712 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-07-04 18:17:37.438721 | orchestrator | Friday 04 July 2025 18:09:33 +0000 (0:00:00.385) 0:04:54.500 ***********
2025-07-04 18:17:37.438726 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-07-04 18:17:37.438731 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-07-04 18:17:37.438736 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-07-04 18:17:37.438742 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:17:37.438747 | orchestrator |
2025-07-04 18:17:37.438752 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2025-07-04 18:17:37.438758 | orchestrator | Friday 04 July 2025 18:09:33 +0000 (0:00:00.460) 0:04:54.961 ***********
2025-07-04 18:17:37.438763 | orchestrator | ok: [testbed-node-3]
2025-07-04 18:17:37.438768 | orchestrator | ok: [testbed-node-4]
2025-07-04 18:17:37.438773 | orchestrator | ok: [testbed-node-5]
2025-07-04 18:17:37.438779 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:17:37.438784 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:17:37.438789 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:17:37.438794 | orchestrator |
2025-07-04 18:17:37.438799 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2025-07-04 18:17:37.438805 | orchestrator | Friday 04 July 2025 18:09:34 +0000 (0:00:01.032) 0:04:55.993 ***********
2025-07-04 18:17:37.438810 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-07-04 18:17:37.438816 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-07-04 18:17:37.438821 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-07-04 18:17:37.438826 | orchestrator | skipping: [testbed-node-0] => (item=0)
2025-07-04 18:17:37.438832 | orchestrator | skipping: [testbed-node-1] => (item=0)
2025-07-04 18:17:37.438837 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:17:37.438842 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:17:37.438847 | orchestrator | skipping: [testbed-node-2] => (item=0)
2025-07-04 18:17:37.438852 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:17:37.438858 | orchestrator |
2025-07-04 18:17:37.438863 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2025-07-04 18:17:37.438869 | orchestrator | Friday 04 July 2025 18:09:37 +0000 (0:00:02.897) 0:04:58.890 ***********
2025-07-04 18:17:37.438874 | orchestrator | changed: [testbed-node-3]
2025-07-04 18:17:37.438879 | orchestrator | changed: [testbed-node-4]
2025-07-04 18:17:37.438884 | orchestrator | changed: [testbed-node-5]
2025-07-04 18:17:37.438889 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:17:37.438895 | orchestrator | changed: [testbed-node-1]
2025-07-04 18:17:37.438900 | orchestrator | changed: [testbed-node-2]
2025-07-04 18:17:37.438905 | orchestrator |
2025-07-04 18:17:37.438910 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-07-04 18:17:37.438915 | orchestrator | Friday 04 July 2025 18:09:40 +0000 (0:00:02.948) 0:05:01.839 ***********
2025-07-04 18:17:37.438921 | orchestrator | changed: [testbed-node-3]
2025-07-04 18:17:37.438926 | orchestrator | changed: [testbed-node-5]
2025-07-04 18:17:37.438931 | orchestrator | changed: [testbed-node-4]
2025-07-04 18:17:37.438936 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:17:37.438941 | orchestrator | changed: [testbed-node-1]
2025-07-04 18:17:37.438947 | orchestrator | changed: [testbed-node-2]
2025-07-04 18:17:37.438952 | orchestrator |
2025-07-04 18:17:37.438957 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2025-07-04 18:17:37.438962 | orchestrator | Friday 04 July 2025 18:09:41 +0000 (0:00:01.193) 0:05:03.032 ***********
2025-07-04 18:17:37.438967 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:17:37.438973 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:17:37.438978 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:17:37.438983 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-04 18:17:37.438989 | orchestrator |
2025-07-04 18:17:37.438994 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2025-07-04 18:17:37.439005 | orchestrator | Friday 04 July 2025 18:09:43 +0000 (0:00:01.089) 0:05:04.122 ***********
2025-07-04 18:17:37.439026 | orchestrator | ok: [testbed-node-0]
2025-07-04 18:17:37.439033 | orchestrator | ok: [testbed-node-1]
2025-07-04 18:17:37.439038 | orchestrator | ok: [testbed-node-2]
2025-07-04 18:17:37.439043 | orchestrator |
2025-07-04 18:17:37.439049 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2025-07-04 18:17:37.439054 | orchestrator | Friday 04 July 2025 18:09:43 +0000 (0:00:00.394) 0:05:04.516 ***********
2025-07-04 18:17:37.439059 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:17:37.439065 | orchestrator | changed: [testbed-node-1]
2025-07-04 18:17:37.439070 | orchestrator | changed: [testbed-node-2]
2025-07-04 18:17:37.439075 | orchestrator |
2025-07-04 18:17:37.439080 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2025-07-04 18:17:37.439086 | orchestrator | Friday 04 July 2025 18:09:45 +0000 (0:00:01.649) 0:05:06.166 ***********
2025-07-04 18:17:37.439091 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-07-04 18:17:37.439096 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-07-04 18:17:37.439102 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-07-04 18:17:37.439107 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:17:37.439112 | orchestrator |
2025-07-04 18:17:37.439118 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2025-07-04 18:17:37.439126 | orchestrator | Friday 04 July 2025 18:09:45 +0000 (0:00:00.669) 0:05:06.836 ***********
2025-07-04 18:17:37.439131 | orchestrator | ok: [testbed-node-0]
2025-07-04 18:17:37.439136 | orchestrator | ok: [testbed-node-1]
2025-07-04 18:17:37.439142 | orchestrator | ok: [testbed-node-2]
2025-07-04 18:17:37.439147 | orchestrator |
2025-07-04 18:17:37.439152 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2025-07-04 18:17:37.439174 | orchestrator | Friday 04 July 2025 18:09:46 +0000 (0:00:00.342) 0:05:07.178 ***********
2025-07-04 18:17:37.439179 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:17:37.439185 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:17:37.439190 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:17:37.439196 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-07-04 18:17:37.439201 | orchestrator |
2025-07-04 18:17:37.439206 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2025-07-04 18:17:37.439212 | orchestrator | Friday 04 July 2025 18:09:47 +0000 (0:00:01.140) 0:05:08.319 ***********
2025-07-04 18:17:37.439217 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-07-04 18:17:37.439222 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-07-04 18:17:37.439228 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-07-04 18:17:37.439233 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:17:37.439239 | orchestrator |
2025-07-04 18:17:37.439244 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2025-07-04 18:17:37.439249 | orchestrator | Friday 04 July 2025 18:09:47 +0000 (0:00:00.423) 0:05:08.743 ***********
2025-07-04 18:17:37.439255 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:17:37.439260 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:17:37.439265 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:17:37.439271 | orchestrator |
2025-07-04 18:17:37.439276 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2025-07-04 18:17:37.439281 | orchestrator | Friday 04 July 2025 18:09:48 +0000 (0:00:00.335) 0:05:09.078 ***********
2025-07-04 18:17:37.439287 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:17:37.439292 | orchestrator |
2025-07-04 18:17:37.439297 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2025-07-04 18:17:37.439303 | orchestrator | Friday 04 July 2025 18:09:48 +0000 (0:00:00.225) 0:05:09.304 ***********
2025-07-04 18:17:37.439308 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:17:37.439314 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:17:37.439324 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:17:37.439330 | orchestrator |
2025-07-04 18:17:37.439335 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2025-07-04 18:17:37.439341 | orchestrator | Friday 04 July 2025 18:09:48 +0000 (0:00:00.310) 0:05:09.615 ***********
2025-07-04 18:17:37.439346 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:17:37.439351 | orchestrator |
2025-07-04 18:17:37.439356 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2025-07-04 18:17:37.439362 | orchestrator | Friday 04 July 2025 18:09:48 +0000 (0:00:00.225) 0:05:09.840 ***********
2025-07-04 18:17:37.439367 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:17:37.439373 | orchestrator |
2025-07-04 18:17:37.439378 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2025-07-04 18:17:37.439383 | orchestrator | Friday 04 July 2025 18:09:49 +0000 (0:00:00.245) 0:05:10.086 ***********
2025-07-04 18:17:37.439389 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:17:37.439394 | orchestrator |
2025-07-04 18:17:37.439399 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2025-07-04 18:17:37.439405 | orchestrator | Friday 04 July 2025 18:09:49 +0000 (0:00:00.381) 0:05:10.467 ***********
2025-07-04 18:17:37.439410 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:17:37.439415 | orchestrator |
2025-07-04 18:17:37.439421 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2025-07-04 18:17:37.439426 | orchestrator | Friday 04 July 2025 18:09:49 +0000 (0:00:00.256) 0:05:10.724 ***********
2025-07-04 18:17:37.439431 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:17:37.439437 | orchestrator |
2025-07-04 18:17:37.439442 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2025-07-04 18:17:37.439448 | orchestrator | Friday 04 July 2025 18:09:49 +0000 (0:00:00.502) 0:05:10.954 ***********
2025-07-04 18:17:37.439453 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-07-04 18:17:37.439458 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-07-04 18:17:37.439464 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-07-04 18:17:37.439469 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:17:37.439474 | orchestrator |
2025-07-04 18:17:37.439480 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2025-07-04 18:17:37.439503 | orchestrator | Friday 04 July 2025 18:09:50 +0000 (0:00:00.502) 0:05:11.456 ***********
2025-07-04 18:17:37.439509 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:17:37.439515 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:17:37.439520 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:17:37.439525 | orchestrator |
2025-07-04 18:17:37.439531 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2025-07-04 18:17:37.439536 | orchestrator | Friday 04 July 2025 18:09:50 +0000 (0:00:00.276) 0:05:11.733 ***********
2025-07-04 18:17:37.439541 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:17:37.439547 | orchestrator |
2025-07-04 18:17:37.439552 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2025-07-04 18:17:37.439557 | orchestrator | Friday 04 July 2025 18:09:50 +0000 (0:00:00.228) 0:05:11.962 ***********
2025-07-04 18:17:37.439562 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:17:37.439568 | orchestrator |
2025-07-04 18:17:37.439573 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2025-07-04 18:17:37.439578 | orchestrator | Friday 04 July 2025 18:09:51 +0000 (0:00:00.215) 0:05:12.178 ***********
2025-07-04 18:17:37.439584 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:17:37.439589 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:17:37.439594 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:17:37.439604 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-07-04 18:17:37.439609 | orchestrator |
2025-07-04 18:17:37.439615 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2025-07-04 18:17:37.439625 | orchestrator | Friday 04 July 2025 18:09:51 +0000 (0:00:00.849) 0:05:13.028 ***********
2025-07-04 18:17:37.439630 | orchestrator | ok: [testbed-node-3]
2025-07-04 18:17:37.439635 | orchestrator | ok: [testbed-node-4]
2025-07-04 18:17:37.439641 | orchestrator | ok: [testbed-node-5]
2025-07-04 18:17:37.439646 | orchestrator |
2025-07-04 18:17:37.439651 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2025-07-04 18:17:37.439657 | orchestrator | Friday 04 July 2025 18:09:52 +0000 (0:00:00.301) 0:05:13.329 ***********
2025-07-04 18:17:37.439662 | orchestrator | changed: [testbed-node-3]
2025-07-04 18:17:37.439667 | orchestrator | changed: [testbed-node-4]
2025-07-04 18:17:37.439673 | orchestrator | changed: [testbed-node-5]
2025-07-04 18:17:37.439678 | orchestrator |
2025-07-04 18:17:37.439683 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2025-07-04 18:17:37.439688 | orchestrator | Friday 04 July 2025 18:09:53 +0000 (0:00:01.147) 0:05:14.476 ***********
2025-07-04 18:17:37.439694 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-07-04 18:17:37.439699 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-07-04 18:17:37.439704 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-07-04 18:17:37.439709 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:17:37.439715 | orchestrator |
2025-07-04
18:17:37.439720 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2025-07-04 18:17:37.439725 | orchestrator | Friday 04 July 2025 18:09:54 +0000 (0:00:00.969) 0:05:15.446 *********** 2025-07-04 18:17:37.439731 | orchestrator | ok: [testbed-node-3] 2025-07-04 18:17:37.439736 | orchestrator | ok: [testbed-node-4] 2025-07-04 18:17:37.439741 | orchestrator | ok: [testbed-node-5] 2025-07-04 18:17:37.439746 | orchestrator | 2025-07-04 18:17:37.439752 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2025-07-04 18:17:37.439757 | orchestrator | Friday 04 July 2025 18:09:54 +0000 (0:00:00.314) 0:05:15.761 *********** 2025-07-04 18:17:37.439762 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:17:37.439767 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:17:37.439773 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:17:37.439778 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-04 18:17:37.439784 | orchestrator | 2025-07-04 18:17:37.439789 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-07-04 18:17:37.439794 | orchestrator | Friday 04 July 2025 18:09:55 +0000 (0:00:00.990) 0:05:16.751 *********** 2025-07-04 18:17:37.439799 | orchestrator | ok: [testbed-node-3] 2025-07-04 18:17:37.439805 | orchestrator | ok: [testbed-node-4] 2025-07-04 18:17:37.439810 | orchestrator | ok: [testbed-node-5] 2025-07-04 18:17:37.439815 | orchestrator | 2025-07-04 18:17:37.439821 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2025-07-04 18:17:37.439826 | orchestrator | Friday 04 July 2025 18:09:56 +0000 (0:00:00.337) 0:05:17.088 *********** 2025-07-04 18:17:37.439831 | orchestrator | changed: [testbed-node-4] 2025-07-04 18:17:37.439836 | orchestrator | changed: [testbed-node-3] 2025-07-04 
18:17:37.439842 | orchestrator | changed: [testbed-node-5] 2025-07-04 18:17:37.439847 | orchestrator | 2025-07-04 18:17:37.439852 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-07-04 18:17:37.439857 | orchestrator | Friday 04 July 2025 18:09:57 +0000 (0:00:01.267) 0:05:18.356 *********** 2025-07-04 18:17:37.439863 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-07-04 18:17:37.439868 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-07-04 18:17:37.439873 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-07-04 18:17:37.439879 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:17:37.439884 | orchestrator | 2025-07-04 18:17:37.439889 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-07-04 18:17:37.439894 | orchestrator | Friday 04 July 2025 18:09:58 +0000 (0:00:00.860) 0:05:19.217 *********** 2025-07-04 18:17:37.439903 | orchestrator | ok: [testbed-node-3] 2025-07-04 18:17:37.439909 | orchestrator | ok: [testbed-node-4] 2025-07-04 18:17:37.439914 | orchestrator | ok: [testbed-node-5] 2025-07-04 18:17:37.439919 | orchestrator | 2025-07-04 18:17:37.439924 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2025-07-04 18:17:37.439930 | orchestrator | Friday 04 July 2025 18:09:58 +0000 (0:00:00.375) 0:05:19.593 *********** 2025-07-04 18:17:37.439935 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:17:37.439940 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:17:37.439945 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:17:37.439951 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:17:37.439956 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:17:37.439978 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:17:37.439984 | orchestrator | 2025-07-04 18:17:37.439990 | orchestrator | RUNNING 
HANDLER [ceph-handler : Mgrs handler] ********************************** 2025-07-04 18:17:37.439995 | orchestrator | Friday 04 July 2025 18:09:59 +0000 (0:00:00.920) 0:05:20.513 *********** 2025-07-04 18:17:37.440000 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:17:37.440006 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:17:37.440011 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:17:37.440016 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-04 18:17:37.440022 | orchestrator | 2025-07-04 18:17:37.440027 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2025-07-04 18:17:37.440032 | orchestrator | Friday 04 July 2025 18:10:00 +0000 (0:00:01.056) 0:05:21.570 *********** 2025-07-04 18:17:37.440038 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:17:37.440043 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:17:37.440048 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:17:37.440053 | orchestrator | 2025-07-04 18:17:37.440059 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2025-07-04 18:17:37.440064 | orchestrator | Friday 04 July 2025 18:10:00 +0000 (0:00:00.338) 0:05:21.909 *********** 2025-07-04 18:17:37.440069 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:17:37.440075 | orchestrator | changed: [testbed-node-1] 2025-07-04 18:17:37.440083 | orchestrator | changed: [testbed-node-2] 2025-07-04 18:17:37.440089 | orchestrator | 2025-07-04 18:17:37.440094 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2025-07-04 18:17:37.440099 | orchestrator | Friday 04 July 2025 18:10:02 +0000 (0:00:01.277) 0:05:23.187 *********** 2025-07-04 18:17:37.440104 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-07-04 18:17:37.440110 | orchestrator | skipping: [testbed-node-0] => 
(item=testbed-node-1)  2025-07-04 18:17:37.440115 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-07-04 18:17:37.440120 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:17:37.440126 | orchestrator | 2025-07-04 18:17:37.440131 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2025-07-04 18:17:37.440136 | orchestrator | Friday 04 July 2025 18:10:02 +0000 (0:00:00.830) 0:05:24.017 *********** 2025-07-04 18:17:37.440145 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:17:37.440154 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:17:37.440187 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:17:37.440199 | orchestrator | 2025-07-04 18:17:37.440206 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2025-07-04 18:17:37.440213 | orchestrator | 2025-07-04 18:17:37.440221 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-07-04 18:17:37.440229 | orchestrator | Friday 04 July 2025 18:10:03 +0000 (0:00:00.829) 0:05:24.846 *********** 2025-07-04 18:17:37.440237 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-04 18:17:37.440244 | orchestrator | 2025-07-04 18:17:37.440252 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-07-04 18:17:37.440260 | orchestrator | Friday 04 July 2025 18:10:04 +0000 (0:00:00.530) 0:05:25.377 *********** 2025-07-04 18:17:37.440275 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-04 18:17:37.440283 | orchestrator | 2025-07-04 18:17:37.440291 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-07-04 18:17:37.440298 | orchestrator | Friday 04 July 2025 18:10:05 +0000 
(0:00:00.825) 0:05:26.202 *********** 2025-07-04 18:17:37.440307 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:17:37.440317 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:17:37.440322 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:17:37.440328 | orchestrator | 2025-07-04 18:17:37.440333 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-07-04 18:17:37.440338 | orchestrator | Friday 04 July 2025 18:10:06 +0000 (0:00:00.861) 0:05:27.064 *********** 2025-07-04 18:17:37.440344 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:17:37.440349 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:17:37.440354 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:17:37.440360 | orchestrator | 2025-07-04 18:17:37.440365 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-07-04 18:17:37.440370 | orchestrator | Friday 04 July 2025 18:10:06 +0000 (0:00:00.441) 0:05:27.505 *********** 2025-07-04 18:17:37.440376 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:17:37.440381 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:17:37.440386 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:17:37.440392 | orchestrator | 2025-07-04 18:17:37.440397 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-07-04 18:17:37.440402 | orchestrator | Friday 04 July 2025 18:10:06 +0000 (0:00:00.324) 0:05:27.830 *********** 2025-07-04 18:17:37.440407 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:17:37.440412 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:17:37.440418 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:17:37.440423 | orchestrator | 2025-07-04 18:17:37.440428 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-07-04 18:17:37.440434 | orchestrator | Friday 04 July 2025 18:10:07 +0000 (0:00:00.602) 
0:05:28.433 *********** 2025-07-04 18:17:37.440439 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:17:37.440444 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:17:37.440449 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:17:37.440455 | orchestrator | 2025-07-04 18:17:37.440460 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-07-04 18:17:37.440465 | orchestrator | Friday 04 July 2025 18:10:08 +0000 (0:00:00.777) 0:05:29.211 *********** 2025-07-04 18:17:37.440470 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:17:37.440476 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:17:37.440481 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:17:37.440486 | orchestrator | 2025-07-04 18:17:37.440492 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-07-04 18:17:37.440497 | orchestrator | Friday 04 July 2025 18:10:08 +0000 (0:00:00.340) 0:05:29.552 *********** 2025-07-04 18:17:37.440502 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:17:37.440532 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:17:37.440538 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:17:37.440544 | orchestrator | 2025-07-04 18:17:37.440549 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-07-04 18:17:37.440554 | orchestrator | Friday 04 July 2025 18:10:08 +0000 (0:00:00.312) 0:05:29.865 *********** 2025-07-04 18:17:37.440560 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:17:37.440565 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:17:37.440571 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:17:37.440576 | orchestrator | 2025-07-04 18:17:37.440581 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-07-04 18:17:37.440586 | orchestrator | Friday 04 July 2025 18:10:09 +0000 (0:00:01.039) 0:05:30.904 *********** 2025-07-04 
18:17:37.440592 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:17:37.440602 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:17:37.440607 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:17:37.440613 | orchestrator | 2025-07-04 18:17:37.440618 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-07-04 18:17:37.440623 | orchestrator | Friday 04 July 2025 18:10:10 +0000 (0:00:00.725) 0:05:31.630 *********** 2025-07-04 18:17:37.440628 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:17:37.440634 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:17:37.440639 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:17:37.440644 | orchestrator | 2025-07-04 18:17:37.440654 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-07-04 18:17:37.440659 | orchestrator | Friday 04 July 2025 18:10:10 +0000 (0:00:00.257) 0:05:31.888 *********** 2025-07-04 18:17:37.440664 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:17:37.440670 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:17:37.440675 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:17:37.440680 | orchestrator | 2025-07-04 18:17:37.440685 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-07-04 18:17:37.440691 | orchestrator | Friday 04 July 2025 18:10:11 +0000 (0:00:00.295) 0:05:32.183 *********** 2025-07-04 18:17:37.440696 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:17:37.440701 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:17:37.440707 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:17:37.440712 | orchestrator | 2025-07-04 18:17:37.440717 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-07-04 18:17:37.440723 | orchestrator | Friday 04 July 2025 18:10:11 +0000 (0:00:00.505) 0:05:32.688 *********** 2025-07-04 18:17:37.440728 | orchestrator | 
skipping: [testbed-node-0] 2025-07-04 18:17:37.440733 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:17:37.440738 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:17:37.440744 | orchestrator | 2025-07-04 18:17:37.440749 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-07-04 18:17:37.440754 | orchestrator | Friday 04 July 2025 18:10:11 +0000 (0:00:00.291) 0:05:32.980 *********** 2025-07-04 18:17:37.440759 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:17:37.440765 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:17:37.440770 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:17:37.440775 | orchestrator | 2025-07-04 18:17:37.440780 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-07-04 18:17:37.440786 | orchestrator | Friday 04 July 2025 18:10:12 +0000 (0:00:00.260) 0:05:33.240 *********** 2025-07-04 18:17:37.440791 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:17:37.440796 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:17:37.440802 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:17:37.440807 | orchestrator | 2025-07-04 18:17:37.440812 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-07-04 18:17:37.440817 | orchestrator | Friday 04 July 2025 18:10:12 +0000 (0:00:00.310) 0:05:33.551 *********** 2025-07-04 18:17:37.440823 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:17:37.440828 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:17:37.440833 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:17:37.440839 | orchestrator | 2025-07-04 18:17:37.440844 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-07-04 18:17:37.440849 | orchestrator | Friday 04 July 2025 18:10:12 +0000 (0:00:00.459) 0:05:34.010 *********** 2025-07-04 18:17:37.440855 | orchestrator | ok: 
[testbed-node-0] 2025-07-04 18:17:37.440860 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:17:37.440865 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:17:37.440871 | orchestrator | 2025-07-04 18:17:37.440876 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-07-04 18:17:37.440881 | orchestrator | Friday 04 July 2025 18:10:13 +0000 (0:00:00.301) 0:05:34.311 *********** 2025-07-04 18:17:37.440886 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:17:37.440892 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:17:37.440897 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:17:37.440907 | orchestrator | 2025-07-04 18:17:37.440913 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-07-04 18:17:37.440918 | orchestrator | Friday 04 July 2025 18:10:13 +0000 (0:00:00.325) 0:05:34.637 *********** 2025-07-04 18:17:37.440923 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:17:37.440928 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:17:37.440934 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:17:37.440939 | orchestrator | 2025-07-04 18:17:37.440944 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2025-07-04 18:17:37.440949 | orchestrator | Friday 04 July 2025 18:10:14 +0000 (0:00:00.678) 0:05:35.316 *********** 2025-07-04 18:17:37.440955 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:17:37.440960 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:17:37.440965 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:17:37.440971 | orchestrator | 2025-07-04 18:17:37.440976 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2025-07-04 18:17:37.440981 | orchestrator | Friday 04 July 2025 18:10:14 +0000 (0:00:00.320) 0:05:35.636 *********** 2025-07-04 18:17:37.440987 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2025-07-04 18:17:37.440992 | orchestrator | 2025-07-04 18:17:37.440997 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2025-07-04 18:17:37.441003 | orchestrator | Friday 04 July 2025 18:10:15 +0000 (0:00:00.526) 0:05:36.163 *********** 2025-07-04 18:17:37.441008 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:17:37.441013 | orchestrator | 2025-07-04 18:17:37.441035 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2025-07-04 18:17:37.441041 | orchestrator | Friday 04 July 2025 18:10:15 +0000 (0:00:00.155) 0:05:36.318 *********** 2025-07-04 18:17:37.441047 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-07-04 18:17:37.441052 | orchestrator | 2025-07-04 18:17:37.441057 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2025-07-04 18:17:37.441062 | orchestrator | Friday 04 July 2025 18:10:16 +0000 (0:00:01.422) 0:05:37.741 *********** 2025-07-04 18:17:37.441068 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:17:37.441073 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:17:37.441078 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:17:37.441083 | orchestrator | 2025-07-04 18:17:37.441089 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2025-07-04 18:17:37.441094 | orchestrator | Friday 04 July 2025 18:10:16 +0000 (0:00:00.295) 0:05:38.036 *********** 2025-07-04 18:17:37.441099 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:17:37.441104 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:17:37.441110 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:17:37.441115 | orchestrator | 2025-07-04 18:17:37.441120 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2025-07-04 18:17:37.441126 | orchestrator | Friday 04 July 2025 18:10:17 +0000 (0:00:00.335) 0:05:38.371 
*********** 2025-07-04 18:17:37.441131 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:17:37.441136 | orchestrator | changed: [testbed-node-1] 2025-07-04 18:17:37.441145 | orchestrator | changed: [testbed-node-2] 2025-07-04 18:17:37.441150 | orchestrator | 2025-07-04 18:17:37.441172 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2025-07-04 18:17:37.441178 | orchestrator | Friday 04 July 2025 18:10:18 +0000 (0:00:01.295) 0:05:39.667 *********** 2025-07-04 18:17:37.441183 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:17:37.441188 | orchestrator | changed: [testbed-node-1] 2025-07-04 18:17:37.441194 | orchestrator | changed: [testbed-node-2] 2025-07-04 18:17:37.441199 | orchestrator | 2025-07-04 18:17:37.441204 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2025-07-04 18:17:37.441209 | orchestrator | Friday 04 July 2025 18:10:19 +0000 (0:00:00.969) 0:05:40.637 *********** 2025-07-04 18:17:37.441215 | orchestrator | changed: [testbed-node-1] 2025-07-04 18:17:37.441220 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:17:37.441225 | orchestrator | changed: [testbed-node-2] 2025-07-04 18:17:37.441235 | orchestrator | 2025-07-04 18:17:37.441240 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2025-07-04 18:17:37.441246 | orchestrator | Friday 04 July 2025 18:10:20 +0000 (0:00:00.763) 0:05:41.400 *********** 2025-07-04 18:17:37.441251 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:17:37.441256 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:17:37.441261 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:17:37.441267 | orchestrator | 2025-07-04 18:17:37.441272 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2025-07-04 18:17:37.441277 | orchestrator | Friday 04 July 2025 18:10:21 +0000 (0:00:00.847) 0:05:42.248 *********** 2025-07-04 
18:17:37.441282 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:17:37.441288 | orchestrator | 2025-07-04 18:17:37.441293 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2025-07-04 18:17:37.441298 | orchestrator | Friday 04 July 2025 18:10:22 +0000 (0:00:01.541) 0:05:43.790 *********** 2025-07-04 18:17:37.441304 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:17:37.441309 | orchestrator | 2025-07-04 18:17:37.441314 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2025-07-04 18:17:37.441320 | orchestrator | Friday 04 July 2025 18:10:23 +0000 (0:00:00.829) 0:05:44.619 *********** 2025-07-04 18:17:37.441325 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-07-04 18:17:37.441330 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-04 18:17:37.441336 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-04 18:17:37.441341 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-07-04 18:17:37.441346 | orchestrator | ok: [testbed-node-1] => (item=None) 2025-07-04 18:17:37.441351 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-07-04 18:17:37.441357 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-07-04 18:17:37.441362 | orchestrator | changed: [testbed-node-0 -> {{ item }}] 2025-07-04 18:17:37.441367 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-07-04 18:17:37.441373 | orchestrator | ok: [testbed-node-1 -> {{ item }}] 2025-07-04 18:17:37.441378 | orchestrator | ok: [testbed-node-2] => (item=None) 2025-07-04 18:17:37.441383 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2025-07-04 18:17:37.441388 | orchestrator | 2025-07-04 18:17:37.441394 | orchestrator | TASK [ceph-mon : Import admin keyring 
into mon keyring] ************************ 2025-07-04 18:17:37.441399 | orchestrator | Friday 04 July 2025 18:10:27 +0000 (0:00:03.841) 0:05:48.461 *********** 2025-07-04 18:17:37.441404 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:17:37.441409 | orchestrator | changed: [testbed-node-1] 2025-07-04 18:17:37.441415 | orchestrator | changed: [testbed-node-2] 2025-07-04 18:17:37.441420 | orchestrator | 2025-07-04 18:17:37.441425 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2025-07-04 18:17:37.441431 | orchestrator | Friday 04 July 2025 18:10:29 +0000 (0:00:01.694) 0:05:50.156 *********** 2025-07-04 18:17:37.441436 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:17:37.441441 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:17:37.441446 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:17:37.441452 | orchestrator | 2025-07-04 18:17:37.441457 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2025-07-04 18:17:37.441462 | orchestrator | Friday 04 July 2025 18:10:29 +0000 (0:00:00.358) 0:05:50.514 *********** 2025-07-04 18:17:37.441468 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:17:37.441473 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:17:37.441478 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:17:37.441483 | orchestrator | 2025-07-04 18:17:37.441488 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2025-07-04 18:17:37.441494 | orchestrator | Friday 04 July 2025 18:10:29 +0000 (0:00:00.347) 0:05:50.861 *********** 2025-07-04 18:17:37.441499 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:17:37.441526 | orchestrator | changed: [testbed-node-1] 2025-07-04 18:17:37.441532 | orchestrator | changed: [testbed-node-2] 2025-07-04 18:17:37.441538 | orchestrator | 2025-07-04 18:17:37.441543 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 
2025-07-04 18:17:37.441548 | orchestrator | Friday 04 July 2025 18:10:32 +0000 (0:00:02.784) 0:05:53.646 ***********
2025-07-04 18:17:37.441553 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:17:37.441559 | orchestrator | changed: [testbed-node-1]
2025-07-04 18:17:37.441564 | orchestrator | changed: [testbed-node-2]
2025-07-04 18:17:37.441569 | orchestrator |
2025-07-04 18:17:37.441574 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
2025-07-04 18:17:37.441580 | orchestrator | Friday 04 July 2025 18:10:34 +0000 (0:00:01.775) 0:05:55.422 ***********
2025-07-04 18:17:37.441585 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:17:37.441590 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:17:37.441595 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:17:37.441601 | orchestrator |
2025-07-04 18:17:37.441606 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************
2025-07-04 18:17:37.441611 | orchestrator | Friday 04 July 2025 18:10:34 +0000 (0:00:00.301) 0:05:55.723 ***********
2025-07-04 18:17:37.441617 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-04 18:17:37.441622 | orchestrator |
2025-07-04 18:17:37.441631 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] *************
2025-07-04 18:17:37.441636 | orchestrator | Friday 04 July 2025 18:10:35 +0000 (0:00:00.479) 0:05:56.203 ***********
2025-07-04 18:17:37.441641 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:17:37.441647 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:17:37.441652 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:17:37.441657 | orchestrator |
2025-07-04 18:17:37.441662 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
2025-07-04 18:17:37.441668 | orchestrator | Friday 04 July 2025 18:10:35 +0000 (0:00:00.462) 0:05:56.666 ***********
2025-07-04 18:17:37.441673 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:17:37.441678 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:17:37.441683 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:17:37.441689 | orchestrator |
2025-07-04 18:17:37.441694 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************
2025-07-04 18:17:37.441699 | orchestrator | Friday 04 July 2025 18:10:35 +0000 (0:00:00.265) 0:05:56.931 ***********
2025-07-04 18:17:37.441704 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-04 18:17:37.441710 | orchestrator |
2025-07-04 18:17:37.441715 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] *****************
2025-07-04 18:17:37.441720 | orchestrator | Friday 04 July 2025 18:10:36 +0000 (0:00:00.479) 0:05:57.411 ***********
2025-07-04 18:17:37.441725 | orchestrator | changed: [testbed-node-1]
2025-07-04 18:17:37.441731 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:17:37.441736 | orchestrator | changed: [testbed-node-2]
2025-07-04 18:17:37.441741 | orchestrator |
2025-07-04 18:17:37.441746 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
2025-07-04 18:17:37.441752 | orchestrator | Friday 04 July 2025 18:10:38 +0000 (0:00:01.802) 0:05:59.213 ***********
2025-07-04 18:17:37.441757 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:17:37.441762 | orchestrator | changed: [testbed-node-1]
2025-07-04 18:17:37.441767 | orchestrator | changed: [testbed-node-2]
2025-07-04 18:17:37.441773 | orchestrator |
2025-07-04 18:17:37.441778 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] ***************************************
2025-07-04 18:17:37.441783 | orchestrator | Friday 04 July 2025 18:10:39 +0000 (0:00:01.178) 0:06:00.392 ***********
2025-07-04 18:17:37.441788 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:17:37.441794 | orchestrator | changed: [testbed-node-1]
2025-07-04 18:17:37.441799 | orchestrator | changed: [testbed-node-2]
2025-07-04 18:17:37.441810 | orchestrator |
2025-07-04 18:17:37.441815 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************
2025-07-04 18:17:37.441821 | orchestrator | Friday 04 July 2025 18:10:41 +0000 (0:00:01.880) 0:06:02.272 ***********
2025-07-04 18:17:37.441826 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:17:37.441831 | orchestrator | changed: [testbed-node-1]
2025-07-04 18:17:37.441837 | orchestrator | changed: [testbed-node-2]
2025-07-04 18:17:37.441842 | orchestrator |
2025-07-04 18:17:37.441847 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] **********************************
2025-07-04 18:17:37.441852 | orchestrator | Friday 04 July 2025 18:10:43 +0000 (0:00:02.135) 0:06:04.407 ***********
2025-07-04 18:17:37.441857 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-04 18:17:37.441863 | orchestrator |
2025-07-04 18:17:37.441868 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] *************
2025-07-04 18:17:37.441873 | orchestrator | Friday 04 July 2025 18:10:44 +0000 (0:00:00.716) 0:06:05.124 ***********
2025-07-04 18:17:37.441879 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left).
2025-07-04 18:17:37.441884 | orchestrator | ok: [testbed-node-0]
2025-07-04 18:17:37.441889 | orchestrator |
2025-07-04 18:17:37.441894 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] **************************************
2025-07-04 18:17:37.441900 | orchestrator | Friday 04 July 2025 18:11:05 +0000 (0:00:21.854) 0:06:26.978 ***********
2025-07-04 18:17:37.441905 | orchestrator | ok: [testbed-node-0]
2025-07-04 18:17:37.441910 | orchestrator | ok: [testbed-node-1]
2025-07-04 18:17:37.441915 | orchestrator | ok: [testbed-node-2]
2025-07-04 18:17:37.441921 | orchestrator |
2025-07-04 18:17:37.441926 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] ***********************************
2025-07-04 18:17:37.441931 | orchestrator | Friday 04 July 2025 18:11:15 +0000 (0:00:09.889) 0:06:36.868 ***********
2025-07-04 18:17:37.441936 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:17:37.441942 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:17:37.441947 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:17:37.441952 | orchestrator |
2025-07-04 18:17:37.441957 | orchestrator | TASK [ceph-mon : Set cluster configs] ******************************************
2025-07-04 18:17:37.441979 | orchestrator | Friday 04 July 2025 18:11:16 +0000 (0:00:00.325) 0:06:37.193 ***********
2025-07-04 18:17:37.441986 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__74c11446c3b16f5ebd4ff69152b3adc05cf575de'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2025-07-04 18:17:37.441994 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__74c11446c3b16f5ebd4ff69152b3adc05cf575de'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2025-07-04 18:17:37.442004 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__74c11446c3b16f5ebd4ff69152b3adc05cf575de'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2025-07-04 18:17:37.442011 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__74c11446c3b16f5ebd4ff69152b3adc05cf575de'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2025-07-04 18:17:37.442041 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__74c11446c3b16f5ebd4ff69152b3adc05cf575de'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2025-07-04 18:17:37.442052 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__74c11446c3b16f5ebd4ff69152b3adc05cf575de'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__74c11446c3b16f5ebd4ff69152b3adc05cf575de'}])
2025-07-04 18:17:37.442058 | orchestrator |
2025-07-04 18:17:37.442064 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-07-04 18:17:37.442069 | orchestrator | Friday 04 July 2025 18:11:31 +0000 (0:00:15.481) 0:06:52.675 ***********
2025-07-04 18:17:37.442075 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:17:37.442080 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:17:37.442085 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:17:37.442091 | orchestrator |
2025-07-04 18:17:37.442096 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2025-07-04 18:17:37.442101 | orchestrator | Friday 04 July 2025 18:11:32 +0000 (0:00:00.382) 0:06:53.058 ***********
2025-07-04 18:17:37.442107 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-04 18:17:37.442112 | orchestrator |
2025-07-04 18:17:37.442117 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2025-07-04 18:17:37.442123 | orchestrator | Friday 04 July 2025 18:11:32 +0000 (0:00:00.824) 0:06:53.882 ***********
2025-07-04 18:17:37.442128 | orchestrator | ok: [testbed-node-0]
2025-07-04 18:17:37.442133 | orchestrator | ok: [testbed-node-1]
2025-07-04 18:17:37.442138 | orchestrator | ok: [testbed-node-2]
2025-07-04 18:17:37.442144 | orchestrator |
2025-07-04 18:17:37.442149 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2025-07-04 18:17:37.442154 | orchestrator | Friday 04 July 2025 18:11:33 +0000 (0:00:00.354) 0:06:54.236 ***********
2025-07-04 18:17:37.442193 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:17:37.442199 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:17:37.442204 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:17:37.442210 | orchestrator |
2025-07-04 18:17:37.442215 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2025-07-04 18:17:37.442220 | orchestrator | Friday 04 July 2025 18:11:33 +0000 (0:00:00.370) 0:06:54.607 ***********
2025-07-04 18:17:37.442225 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-07-04 18:17:37.442231 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-07-04 18:17:37.442236 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-07-04 18:17:37.442241 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:17:37.442247 | orchestrator |
2025-07-04 18:17:37.442252 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2025-07-04 18:17:37.442257 | orchestrator | Friday 04 July 2025 18:11:34 +0000 (0:00:00.968) 0:06:55.576 ***********
2025-07-04 18:17:37.442263 | orchestrator | ok: [testbed-node-0]
2025-07-04 18:17:37.442268 | orchestrator | ok: [testbed-node-1]
2025-07-04 18:17:37.442294 | orchestrator | ok: [testbed-node-2]
2025-07-04 18:17:37.442300 | orchestrator |
2025-07-04 18:17:37.442305 | orchestrator | PLAY [Apply role ceph-mgr] *****************************************************
2025-07-04 18:17:37.442311 | orchestrator |
2025-07-04 18:17:37.442316 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-07-04 18:17:37.442321 | orchestrator | Friday 04 July 2025 18:11:35 +0000 (0:00:00.872) 0:06:56.449 ***********
2025-07-04 18:17:37.442327 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-04 18:17:37.442338 | orchestrator |
2025-07-04 18:17:37.442343 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-07-04 18:17:37.442349 | orchestrator | Friday 04 July 2025 18:11:36 +0000 (0:00:00.599) 0:06:57.048 ***********
2025-07-04 18:17:37.442354 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-04 18:17:37.442359 | orchestrator |
2025-07-04 18:17:37.442365 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-07-04 18:17:37.442370 | orchestrator | Friday 04 July 2025 18:11:36 +0000 (0:00:00.899) 0:06:57.948 ***********
2025-07-04 18:17:37.442375 | orchestrator | ok: [testbed-node-0]
2025-07-04 18:17:37.442384 | orchestrator | ok: [testbed-node-1]
2025-07-04 18:17:37.442390 | orchestrator | ok: [testbed-node-2]
2025-07-04 18:17:37.442395 | orchestrator |
2025-07-04 18:17:37.442400 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-07-04 18:17:37.442406 | orchestrator | Friday 04 July 2025 18:11:37 +0000 (0:00:00.790) 0:06:58.739 ***********
2025-07-04 18:17:37.442411 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:17:37.442416 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:17:37.442421 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:17:37.442427 | orchestrator |
2025-07-04 18:17:37.442432 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-07-04 18:17:37.442437 | orchestrator | Friday 04 July 2025 18:11:38 +0000 (0:00:00.383) 0:06:59.123 ***********
2025-07-04 18:17:37.442442 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:17:37.442447 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:17:37.442453 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:17:37.442458 | orchestrator |
2025-07-04 18:17:37.442463 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-07-04 18:17:37.442468 | orchestrator | Friday 04 July 2025 18:11:38 +0000 (0:00:00.683) 0:06:59.807 ***********
2025-07-04 18:17:37.442474 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:17:37.442479 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:17:37.442484 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:17:37.442489 | orchestrator |
2025-07-04 18:17:37.442495 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-07-04 18:17:37.442500 | orchestrator | Friday 04 July 2025 18:11:39 +0000 (0:00:00.304) 0:07:00.111 ***********
2025-07-04 18:17:37.442505 | orchestrator | ok: [testbed-node-0]
2025-07-04 18:17:37.442510 | orchestrator | ok: [testbed-node-1]
2025-07-04 18:17:37.442516 | orchestrator | ok: [testbed-node-2]
2025-07-04 18:17:37.442521 | orchestrator |
2025-07-04 18:17:37.442526 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-07-04 18:17:37.442531 | orchestrator | Friday 04 July 2025 18:11:39 +0000 (0:00:00.763) 0:07:00.874 ***********
2025-07-04 18:17:37.442537 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:17:37.442542 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:17:37.442547 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:17:37.442552 | orchestrator |
2025-07-04 18:17:37.442557 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-07-04 18:17:37.442563 | orchestrator | Friday 04 July 2025 18:11:40 +0000 (0:00:00.315) 0:07:01.190 ***********
2025-07-04 18:17:37.442568 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:17:37.442573 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:17:37.442578 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:17:37.442584 | orchestrator |
2025-07-04 18:17:37.442589 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-07-04 18:17:37.442594 | orchestrator | Friday 04 July 2025 18:11:40 +0000 (0:00:00.584) 0:07:01.774 ***********
2025-07-04 18:17:37.442599 | orchestrator | ok: [testbed-node-0]
2025-07-04 18:17:37.442605 | orchestrator | ok: [testbed-node-1]
2025-07-04 18:17:37.442610 | orchestrator | ok: [testbed-node-2]
2025-07-04 18:17:37.442615 | orchestrator |
2025-07-04 18:17:37.442620 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-07-04 18:17:37.442630 | orchestrator | Friday 04 July 2025 18:11:41 +0000 (0:00:00.693) 0:07:02.468 ***********
2025-07-04 18:17:37.442635 | orchestrator | ok: [testbed-node-0]
2025-07-04 18:17:37.442640 | orchestrator | ok: [testbed-node-1]
2025-07-04 18:17:37.442646 | orchestrator | ok: [testbed-node-2]
2025-07-04 18:17:37.442651 | orchestrator |
2025-07-04 18:17:37.442656 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-07-04 18:17:37.442662 | orchestrator | Friday 04 July 2025 18:11:42 +0000 (0:00:00.709) 0:07:03.178 ***********
2025-07-04 18:17:37.442667 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:17:37.442672 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:17:37.442677 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:17:37.442683 | orchestrator |
2025-07-04 18:17:37.442688 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-07-04 18:17:37.442693 | orchestrator | Friday 04 July 2025 18:11:42 +0000 (0:00:00.292) 0:07:03.470 ***********
2025-07-04 18:17:37.442698 | orchestrator | ok: [testbed-node-0]
2025-07-04 18:17:37.442704 | orchestrator | ok: [testbed-node-1]
2025-07-04 18:17:37.442709 | orchestrator | ok: [testbed-node-2]
2025-07-04 18:17:37.442714 | orchestrator |
2025-07-04 18:17:37.442719 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-07-04 18:17:37.442724 | orchestrator | Friday 04 July 2025 18:11:43 +0000 (0:00:00.579) 0:07:04.049 ***********
2025-07-04 18:17:37.442729 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:17:37.442733 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:17:37.442738 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:17:37.442743 | orchestrator |
2025-07-04 18:17:37.442747 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-07-04 18:17:37.442752 | orchestrator | Friday 04 July 2025 18:11:43 +0000 (0:00:00.282) 0:07:04.331 ***********
2025-07-04 18:17:37.442774 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:17:37.442780 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:17:37.442784 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:17:37.442789 | orchestrator |
2025-07-04 18:17:37.442794 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-07-04 18:17:37.442798 | orchestrator | Friday 04 July 2025 18:11:43 +0000 (0:00:00.311) 0:07:04.643 ***********
2025-07-04 18:17:37.442803 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:17:37.442808 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:17:37.442812 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:17:37.442817 | orchestrator |
2025-07-04 18:17:37.442822 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-07-04 18:17:37.442827 | orchestrator | Friday 04 July 2025 18:11:43 +0000 (0:00:00.317) 0:07:04.961 ***********
2025-07-04 18:17:37.442831 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:17:37.442836 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:17:37.442841 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:17:37.442845 | orchestrator |
2025-07-04 18:17:37.442850 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-07-04 18:17:37.442855 | orchestrator | Friday 04 July 2025 18:11:44 +0000 (0:00:00.571) 0:07:05.532 ***********
2025-07-04 18:17:37.442859 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:17:37.442864 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:17:37.442871 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:17:37.442876 | orchestrator |
2025-07-04 18:17:37.442881 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-07-04 18:17:37.442886 | orchestrator | Friday 04 July 2025 18:11:44 +0000 (0:00:00.301) 0:07:05.834 ***********
2025-07-04 18:17:37.442890 | orchestrator | ok: [testbed-node-0]
2025-07-04 18:17:37.442895 | orchestrator | ok: [testbed-node-1]
2025-07-04 18:17:37.442900 | orchestrator | ok: [testbed-node-2]
2025-07-04 18:17:37.442904 | orchestrator |
2025-07-04 18:17:37.442909 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-07-04 18:17:37.442917 | orchestrator | Friday 04 July 2025 18:11:45 +0000 (0:00:00.333) 0:07:06.168 ***********
2025-07-04 18:17:37.442922 | orchestrator | ok: [testbed-node-0]
2025-07-04 18:17:37.442927 | orchestrator | ok: [testbed-node-1]
2025-07-04 18:17:37.442931 | orchestrator | ok: [testbed-node-2]
2025-07-04 18:17:37.442936 | orchestrator |
2025-07-04 18:17:37.442941 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-07-04 18:17:37.442945 | orchestrator | Friday 04 July 2025 18:11:45 +0000 (0:00:00.338) 0:07:06.506 ***********
2025-07-04 18:17:37.442950 | orchestrator | ok: [testbed-node-0]
2025-07-04 18:17:37.442955 | orchestrator | ok: [testbed-node-1]
2025-07-04 18:17:37.442959 | orchestrator | ok: [testbed-node-2]
2025-07-04 18:17:37.442964 | orchestrator |
2025-07-04 18:17:37.442969 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2025-07-04 18:17:37.442973 | orchestrator | Friday 04 July 2025 18:11:46 +0000 (0:00:00.826) 0:07:07.333 ***********
2025-07-04 18:17:37.442978 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-07-04 18:17:37.442983 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-07-04 18:17:37.442987 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-07-04 18:17:37.442992 | orchestrator |
2025-07-04 18:17:37.442997 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2025-07-04 18:17:37.443001 | orchestrator | Friday 04 July 2025 18:11:46 +0000 (0:00:00.690) 0:07:08.023 ***********
2025-07-04 18:17:37.443006 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-04 18:17:37.443011 | orchestrator |
2025-07-04 18:17:37.443015 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2025-07-04 18:17:37.443020 | orchestrator | Friday 04 July 2025 18:11:47 +0000 (0:00:00.505) 0:07:08.529 ***********
2025-07-04 18:17:37.443025 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:17:37.443029 | orchestrator | changed: [testbed-node-1]
2025-07-04 18:17:37.443034 | orchestrator | changed: [testbed-node-2]
2025-07-04 18:17:37.443039 | orchestrator |
2025-07-04 18:17:37.443043 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2025-07-04 18:17:37.443048 | orchestrator | Friday 04 July 2025 18:11:48 +0000 (0:00:00.979) 0:07:09.508 ***********
2025-07-04 18:17:37.443052 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:17:37.443057 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:17:37.443062 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:17:37.443066 | orchestrator |
2025-07-04 18:17:37.443071 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
2025-07-04 18:17:37.443076 | orchestrator | Friday 04 July 2025 18:11:48 +0000 (0:00:00.356) 0:07:09.864 ***********
2025-07-04 18:17:37.443080 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-07-04 18:17:37.443085 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-07-04 18:17:37.443090 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-07-04 18:17:37.443095 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}]
2025-07-04 18:17:37.443099 | orchestrator |
2025-07-04 18:17:37.443104 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2025-07-04 18:17:37.443108 | orchestrator | Friday 04 July 2025 18:11:59 +0000 (0:00:10.700) 0:07:20.564 ***********
2025-07-04 18:17:37.443113 | orchestrator | ok: [testbed-node-0]
2025-07-04 18:17:37.443118 | orchestrator | ok: [testbed-node-1]
2025-07-04 18:17:37.443122 | orchestrator | ok: [testbed-node-2]
2025-07-04 18:17:37.443127 | orchestrator |
2025-07-04 18:17:37.443132 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2025-07-04 18:17:37.443136 | orchestrator | Friday 04 July 2025 18:11:59 +0000 (0:00:00.345) 0:07:20.910 ***********
2025-07-04 18:17:37.443141 | orchestrator | skipping: [testbed-node-0] => (item=None)
2025-07-04 18:17:37.443146 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-07-04 18:17:37.443150 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-07-04 18:17:37.443170 | orchestrator | ok: [testbed-node-0] => (item=None)
2025-07-04 18:17:37.443175 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-07-04 18:17:37.443196 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-07-04 18:17:37.443201 | orchestrator |
2025-07-04 18:17:37.443206 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2025-07-04 18:17:37.443211 | orchestrator | Friday 04 July 2025 18:12:02 +0000 (0:00:02.488) 0:07:23.399 ***********
2025-07-04 18:17:37.443215 | orchestrator | skipping: [testbed-node-0] => (item=None)
2025-07-04 18:17:37.443220 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-07-04 18:17:37.443225 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-07-04 18:17:37.443229 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-07-04 18:17:37.443234 | orchestrator | changed: [testbed-node-1] => (item=None)
2025-07-04 18:17:37.443239 | orchestrator | changed: [testbed-node-2] => (item=None)
2025-07-04 18:17:37.443244 | orchestrator |
2025-07-04 18:17:37.443248 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2025-07-04 18:17:37.443253 | orchestrator | Friday 04 July 2025 18:12:03 +0000 (0:00:01.499) 0:07:24.898 ***********
2025-07-04 18:17:37.443258 | orchestrator | ok: [testbed-node-0]
2025-07-04 18:17:37.443263 | orchestrator | ok: [testbed-node-1]
2025-07-04 18:17:37.443267 | orchestrator | ok: [testbed-node-2]
2025-07-04 18:17:37.443272 | orchestrator |
2025-07-04 18:17:37.443277 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2025-07-04 18:17:37.443285 | orchestrator | Friday 04 July 2025 18:12:04 +0000 (0:00:00.670) 0:07:25.569 ***********
2025-07-04 18:17:37.443293 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:17:37.443300 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:17:37.443308 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:17:37.443321 | orchestrator |
2025-07-04 18:17:37.443329 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2025-07-04 18:17:37.443336 | orchestrator | Friday 04 July 2025 18:12:04 +0000 (0:00:00.310) 0:07:25.879 ***********
2025-07-04 18:17:37.443344 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:17:37.443351 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:17:37.443358 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:17:37.443364 | orchestrator |
2025-07-04 18:17:37.443372 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2025-07-04 18:17:37.443379 | orchestrator | Friday 04 July 2025 18:12:05 +0000 (0:00:00.315) 0:07:26.194 ***********
2025-07-04 18:17:37.443387 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-04 18:17:37.443395 | orchestrator |
2025-07-04 18:17:37.443403 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2025-07-04 18:17:37.443409 | orchestrator | Friday 04 July 2025 18:12:05 +0000 (0:00:00.773) 0:07:26.968 ***********
2025-07-04 18:17:37.443417 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:17:37.443425 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:17:37.443433 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:17:37.443439 | orchestrator |
2025-07-04 18:17:37.443443 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2025-07-04 18:17:37.443448 | orchestrator | Friday 04 July 2025 18:12:06 +0000 (0:00:00.325) 0:07:27.293 ***********
2025-07-04 18:17:37.443453 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:17:37.443457 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:17:37.443462 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:17:37.443467 | orchestrator |
2025-07-04 18:17:37.443472 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2025-07-04 18:17:37.443476 | orchestrator | Friday 04 July 2025 18:12:06 +0000 (0:00:00.334) 0:07:27.627 ***********
2025-07-04 18:17:37.443481 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-04 18:17:37.443486 | orchestrator |
2025-07-04 18:17:37.443496 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] ***********************************
2025-07-04 18:17:37.443501 | orchestrator | Friday 04 July 2025 18:12:07 +0000 (0:00:00.867) 0:07:28.494 ***********
2025-07-04 18:17:37.443506 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:17:37.443510 | orchestrator | changed: [testbed-node-1]
2025-07-04 18:17:37.443515 | orchestrator | changed: [testbed-node-2]
2025-07-04 18:17:37.443520 | orchestrator |
2025-07-04 18:17:37.443524 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
2025-07-04 18:17:37.443529 | orchestrator | Friday 04 July 2025 18:12:08 +0000 (0:00:01.297) 0:07:29.792 ***********
2025-07-04 18:17:37.443534 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:17:37.443538 | orchestrator | changed: [testbed-node-1]
2025-07-04 18:17:37.443543 | orchestrator | changed: [testbed-node-2]
2025-07-04 18:17:37.443548 | orchestrator |
2025-07-04 18:17:37.443552 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
2025-07-04 18:17:37.443557 | orchestrator | Friday 04 July 2025 18:12:09 +0000 (0:00:01.163) 0:07:30.955 ***********
2025-07-04 18:17:37.443562 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:17:37.443567 | orchestrator | changed: [testbed-node-1]
2025-07-04 18:17:37.443571 | orchestrator | changed: [testbed-node-2]
2025-07-04 18:17:37.443576 | orchestrator |
2025-07-04 18:17:37.443581 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ********************************************
2025-07-04 18:17:37.443585 | orchestrator | Friday 04 July 2025 18:12:12 +0000 (0:00:02.108) 0:07:33.063 ***********
2025-07-04 18:17:37.443590 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:17:37.443595 | orchestrator | changed: [testbed-node-1]
2025-07-04 18:17:37.443599 | orchestrator | changed: [testbed-node-2]
2025-07-04 18:17:37.443604 | orchestrator |
2025-07-04 18:17:37.443609 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2025-07-04 18:17:37.443613 | orchestrator | Friday 04 July 2025 18:12:14 +0000 (0:00:02.068) 0:07:35.132 ***********
2025-07-04 18:17:37.443618 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:17:37.443623 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:17:37.443628 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2
2025-07-04 18:17:37.443632 | orchestrator |
2025-07-04 18:17:37.443637 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************
2025-07-04 18:17:37.443642 | orchestrator | Friday 04 July 2025 18:12:14 +0000 (0:00:00.385) 0:07:35.518 ***********
2025-07-04 18:17:37.443668 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left).
2025-07-04 18:17:37.443674 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left).
2025-07-04 18:17:37.443679 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left).
2025-07-04 18:17:37.443684 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left).
2025-07-04 18:17:37.443689 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2025-07-04 18:17:37.443693 | orchestrator |
2025-07-04 18:17:37.443698 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
2025-07-04 18:17:37.443703 | orchestrator | Friday 04 July 2025 18:12:38 +0000 (0:00:24.114) 0:07:59.633 ***********
2025-07-04 18:17:37.443708 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2025-07-04 18:17:37.443712 | orchestrator |
2025-07-04 18:17:37.443717 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
2025-07-04 18:17:37.443722 | orchestrator | Friday 04 July 2025 18:12:40 +0000 (0:00:01.582) 0:08:01.215 ***********
2025-07-04 18:17:37.443727 | orchestrator | ok: [testbed-node-2]
2025-07-04 18:17:37.443731 | orchestrator |
2025-07-04 18:17:37.443740 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
2025-07-04 18:17:37.443745 | orchestrator | Friday 04 July 2025 18:12:40 +0000 (0:00:00.818) 0:08:02.034 ***********
2025-07-04 18:17:37.443754 | orchestrator | ok: [testbed-node-2]
2025-07-04 18:17:37.443759 | orchestrator |
2025-07-04 18:17:37.443764 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
2025-07-04 18:17:37.443768 | orchestrator | Friday 04 July 2025 18:12:41 +0000 (0:00:00.125) 0:08:02.159 ***********
2025-07-04 18:17:37.443773 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat)
2025-07-04 18:17:37.443778 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs)
2025-07-04 18:17:37.443782 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful)
2025-07-04 18:17:37.443787 | orchestrator |
2025-07-04 18:17:37.443792 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] **************************************
2025-07-04 18:17:37.443797 | orchestrator | Friday 04 July 2025 18:12:47 +0000 (0:00:06.386) 0:08:08.546 ***********
2025-07-04 18:17:37.443801 | orchestrator | skipping: [testbed-node-2] => (item=balancer)
2025-07-04 18:17:37.443806 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard)
2025-07-04 18:17:37.443811 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus)
2025-07-04 18:17:37.443815 | orchestrator | skipping: [testbed-node-2] => (item=status)
2025-07-04 18:17:37.443820 | orchestrator |
2025-07-04 18:17:37.443825 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-07-04 18:17:37.443830 | orchestrator | Friday 04 July 2025 18:12:52 +0000 (0:00:04.909) 0:08:13.455 ***********
2025-07-04 18:17:37.443834 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:17:37.443839 | orchestrator | changed: [testbed-node-1]
2025-07-04 18:17:37.443844 | orchestrator | changed: [testbed-node-2]
2025-07-04 18:17:37.443848 | orchestrator |
2025-07-04 18:17:37.443853 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2025-07-04 18:17:37.443858 | orchestrator | Friday 04 July 2025 18:12:53 +0000 (0:00:00.886) 0:08:14.342 ***********
2025-07-04 18:17:37.443863 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-04 18:17:37.443868 | orchestrator |
2025-07-04 18:17:37.443872 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2025-07-04 18:17:37.443877 | orchestrator | Friday 04 July 2025 18:12:53 +0000 (0:00:00.534) 0:08:14.876 ***********
2025-07-04 18:17:37.443882 | orchestrator | ok: [testbed-node-0]
2025-07-04 18:17:37.443886 | orchestrator | ok: [testbed-node-1]
2025-07-04 18:17:37.443891 | orchestrator | ok: [testbed-node-2]
2025-07-04 18:17:37.443896 | orchestrator |
2025-07-04 18:17:37.443900 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2025-07-04 18:17:37.443905 | orchestrator | Friday 04 July 2025 18:12:54 +0000 (0:00:00.327) 0:08:15.204 ***********
2025-07-04 18:17:37.443910 | orchestrator | changed: [testbed-node-1]
2025-07-04 18:17:37.443915 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:17:37.443919 | orchestrator | changed: [testbed-node-2]
2025-07-04 18:17:37.443924 | orchestrator |
2025-07-04 18:17:37.443929 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2025-07-04 18:17:37.443933 | orchestrator | Friday 04 July 2025 18:12:55 +0000 (0:00:01.668) 0:08:16.872 ***********
2025-07-04 18:17:37.443938 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-07-04 18:17:37.443943 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-07-04 18:17:37.443948 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-07-04 18:17:37.443952 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:17:37.443957 | orchestrator |
2025-07-04 18:17:37.443962 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2025-07-04 18:17:37.443966 | orchestrator | Friday 04 July 2025 18:12:56 +0000 (0:00:00.621) 0:08:17.494 ***********
2025-07-04 18:17:37.443971 | orchestrator | ok: [testbed-node-0]
2025-07-04 18:17:37.443976 | orchestrator | ok: [testbed-node-1]
2025-07-04 18:17:37.443981 | orchestrator | ok: [testbed-node-2]
2025-07-04 18:17:37.443986 | orchestrator |
2025-07-04 18:17:37.443994 | orchestrator | PLAY [Apply role ceph-osd] *****************************************************
2025-07-04 18:17:37.443999 | orchestrator |
2025-07-04 18:17:37.444003 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-07-04 18:17:37.444008 | orchestrator | Friday 04 July 2025 18:12:57 +0000 (0:00:00.639) 0:08:18.134 ***********
2025-07-04 18:17:37.444013 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-07-04 18:17:37.444018 | orchestrator |
2025-07-04 18:17:37.444039 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-07-04 18:17:37.444045 | orchestrator | Friday 04 July 2025 18:12:57 +0000 (0:00:00.734) 0:08:18.868 ***********
2025-07-04 18:17:37.444050 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-07-04 18:17:37.444054 | orchestrator |
2025-07-04 18:17:37.444059 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-07-04 18:17:37.444064 | orchestrator | Friday 04 July 2025 18:12:58 +0000 (0:00:00.566) 0:08:19.435 ***********
2025-07-04
18:17:37.444068 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:17:37.444073 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:17:37.444078 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:17:37.444082 | orchestrator | 2025-07-04 18:17:37.444087 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-07-04 18:17:37.444092 | orchestrator | Friday 04 July 2025 18:12:58 +0000 (0:00:00.290) 0:08:19.725 *********** 2025-07-04 18:17:37.444096 | orchestrator | ok: [testbed-node-3] 2025-07-04 18:17:37.444101 | orchestrator | ok: [testbed-node-4] 2025-07-04 18:17:37.444106 | orchestrator | ok: [testbed-node-5] 2025-07-04 18:17:37.444110 | orchestrator | 2025-07-04 18:17:37.444115 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-07-04 18:17:37.444120 | orchestrator | Friday 04 July 2025 18:12:59 +0000 (0:00:00.977) 0:08:20.703 *********** 2025-07-04 18:17:37.444124 | orchestrator | ok: [testbed-node-3] 2025-07-04 18:17:37.444129 | orchestrator | ok: [testbed-node-4] 2025-07-04 18:17:37.444134 | orchestrator | ok: [testbed-node-5] 2025-07-04 18:17:37.444138 | orchestrator | 2025-07-04 18:17:37.444143 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-07-04 18:17:37.444148 | orchestrator | Friday 04 July 2025 18:13:00 +0000 (0:00:00.769) 0:08:21.473 *********** 2025-07-04 18:17:37.444152 | orchestrator | ok: [testbed-node-3] 2025-07-04 18:17:37.444196 | orchestrator | ok: [testbed-node-4] 2025-07-04 18:17:37.444205 | orchestrator | ok: [testbed-node-5] 2025-07-04 18:17:37.444213 | orchestrator | 2025-07-04 18:17:37.444222 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-07-04 18:17:37.444231 | orchestrator | Friday 04 July 2025 18:13:01 +0000 (0:00:00.775) 0:08:22.248 *********** 2025-07-04 18:17:37.444236 | orchestrator | skipping: 
[testbed-node-3] 2025-07-04 18:17:37.444240 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:17:37.444245 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:17:37.444249 | orchestrator | 2025-07-04 18:17:37.444254 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-07-04 18:17:37.444258 | orchestrator | Friday 04 July 2025 18:13:01 +0000 (0:00:00.302) 0:08:22.551 *********** 2025-07-04 18:17:37.444262 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:17:37.444267 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:17:37.444271 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:17:37.444276 | orchestrator | 2025-07-04 18:17:37.444280 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-07-04 18:17:37.444285 | orchestrator | Friday 04 July 2025 18:13:02 +0000 (0:00:00.562) 0:08:23.114 *********** 2025-07-04 18:17:37.444289 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:17:37.444293 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:17:37.444298 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:17:37.444302 | orchestrator | 2025-07-04 18:17:37.444307 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-07-04 18:17:37.444316 | orchestrator | Friday 04 July 2025 18:13:02 +0000 (0:00:00.300) 0:08:23.414 *********** 2025-07-04 18:17:37.444320 | orchestrator | ok: [testbed-node-3] 2025-07-04 18:17:37.444325 | orchestrator | ok: [testbed-node-4] 2025-07-04 18:17:37.444329 | orchestrator | ok: [testbed-node-5] 2025-07-04 18:17:37.444333 | orchestrator | 2025-07-04 18:17:37.444338 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-07-04 18:17:37.444342 | orchestrator | Friday 04 July 2025 18:13:03 +0000 (0:00:00.726) 0:08:24.141 *********** 2025-07-04 18:17:37.444347 | orchestrator | ok: [testbed-node-3] 2025-07-04 
18:17:37.444351 | orchestrator | ok: [testbed-node-4] 2025-07-04 18:17:37.444356 | orchestrator | ok: [testbed-node-5] 2025-07-04 18:17:37.444360 | orchestrator | 2025-07-04 18:17:37.444364 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-07-04 18:17:37.444369 | orchestrator | Friday 04 July 2025 18:13:03 +0000 (0:00:00.816) 0:08:24.958 *********** 2025-07-04 18:17:37.444373 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:17:37.444378 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:17:37.444382 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:17:37.444387 | orchestrator | 2025-07-04 18:17:37.444391 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-07-04 18:17:37.444396 | orchestrator | Friday 04 July 2025 18:13:04 +0000 (0:00:00.560) 0:08:25.519 *********** 2025-07-04 18:17:37.444400 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:17:37.444404 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:17:37.444409 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:17:37.444413 | orchestrator | 2025-07-04 18:17:37.444421 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-07-04 18:17:37.444428 | orchestrator | Friday 04 July 2025 18:13:04 +0000 (0:00:00.320) 0:08:25.839 *********** 2025-07-04 18:17:37.444436 | orchestrator | ok: [testbed-node-3] 2025-07-04 18:17:37.444443 | orchestrator | ok: [testbed-node-4] 2025-07-04 18:17:37.444449 | orchestrator | ok: [testbed-node-5] 2025-07-04 18:17:37.444456 | orchestrator | 2025-07-04 18:17:37.444463 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-07-04 18:17:37.444471 | orchestrator | Friday 04 July 2025 18:13:05 +0000 (0:00:00.353) 0:08:26.192 *********** 2025-07-04 18:17:37.444477 | orchestrator | ok: [testbed-node-3] 2025-07-04 18:17:37.444484 | orchestrator | ok: 
[testbed-node-4] 2025-07-04 18:17:37.444491 | orchestrator | ok: [testbed-node-5] 2025-07-04 18:17:37.444498 | orchestrator | 2025-07-04 18:17:37.444505 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-07-04 18:17:37.444512 | orchestrator | Friday 04 July 2025 18:13:05 +0000 (0:00:00.346) 0:08:26.539 *********** 2025-07-04 18:17:37.444519 | orchestrator | ok: [testbed-node-3] 2025-07-04 18:17:37.444525 | orchestrator | ok: [testbed-node-4] 2025-07-04 18:17:37.444532 | orchestrator | ok: [testbed-node-5] 2025-07-04 18:17:37.444539 | orchestrator | 2025-07-04 18:17:37.444547 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-07-04 18:17:37.444559 | orchestrator | Friday 04 July 2025 18:13:06 +0000 (0:00:00.620) 0:08:27.160 *********** 2025-07-04 18:17:37.444566 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:17:37.444574 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:17:37.444582 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:17:37.444589 | orchestrator | 2025-07-04 18:17:37.444596 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-07-04 18:17:37.444604 | orchestrator | Friday 04 July 2025 18:13:06 +0000 (0:00:00.312) 0:08:27.472 *********** 2025-07-04 18:17:37.444611 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:17:37.444619 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:17:37.444691 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:17:37.444714 | orchestrator | 2025-07-04 18:17:37.444722 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-07-04 18:17:37.444730 | orchestrator | Friday 04 July 2025 18:13:06 +0000 (0:00:00.333) 0:08:27.805 *********** 2025-07-04 18:17:37.444745 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:17:37.444752 | orchestrator | skipping: [testbed-node-4] 2025-07-04 
18:17:37.444760 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:17:37.444768 | orchestrator | 2025-07-04 18:17:37.444775 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-07-04 18:17:37.444782 | orchestrator | Friday 04 July 2025 18:13:07 +0000 (0:00:00.318) 0:08:28.124 *********** 2025-07-04 18:17:37.444793 | orchestrator | ok: [testbed-node-3] 2025-07-04 18:17:37.444801 | orchestrator | ok: [testbed-node-4] 2025-07-04 18:17:37.444809 | orchestrator | ok: [testbed-node-5] 2025-07-04 18:17:37.444816 | orchestrator | 2025-07-04 18:17:37.444823 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-07-04 18:17:37.444831 | orchestrator | Friday 04 July 2025 18:13:07 +0000 (0:00:00.583) 0:08:28.707 *********** 2025-07-04 18:17:37.444837 | orchestrator | ok: [testbed-node-3] 2025-07-04 18:17:37.444844 | orchestrator | ok: [testbed-node-4] 2025-07-04 18:17:37.444852 | orchestrator | ok: [testbed-node-5] 2025-07-04 18:17:37.444860 | orchestrator | 2025-07-04 18:17:37.444868 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2025-07-04 18:17:37.444875 | orchestrator | Friday 04 July 2025 18:13:08 +0000 (0:00:00.622) 0:08:29.329 *********** 2025-07-04 18:17:37.444883 | orchestrator | ok: [testbed-node-3] 2025-07-04 18:17:37.444890 | orchestrator | ok: [testbed-node-4] 2025-07-04 18:17:37.444897 | orchestrator | ok: [testbed-node-5] 2025-07-04 18:17:37.444905 | orchestrator | 2025-07-04 18:17:37.444910 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2025-07-04 18:17:37.444914 | orchestrator | Friday 04 July 2025 18:13:08 +0000 (0:00:00.341) 0:08:29.671 *********** 2025-07-04 18:17:37.444919 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-07-04 18:17:37.444923 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-07-04 18:17:37.444928 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-07-04 18:17:37.444932 | orchestrator | 2025-07-04 18:17:37.444937 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2025-07-04 18:17:37.444941 | orchestrator | Friday 04 July 2025 18:13:09 +0000 (0:00:00.896) 0:08:30.568 *********** 2025-07-04 18:17:37.444946 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-04 18:17:37.444950 | orchestrator | 2025-07-04 18:17:37.444955 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2025-07-04 18:17:37.444959 | orchestrator | Friday 04 July 2025 18:13:10 +0000 (0:00:00.798) 0:08:31.367 *********** 2025-07-04 18:17:37.444963 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:17:37.444968 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:17:37.444972 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:17:37.444977 | orchestrator | 2025-07-04 18:17:37.444981 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2025-07-04 18:17:37.444986 | orchestrator | Friday 04 July 2025 18:13:10 +0000 (0:00:00.325) 0:08:31.692 *********** 2025-07-04 18:17:37.444990 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:17:37.444994 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:17:37.444999 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:17:37.445003 | orchestrator | 2025-07-04 18:17:37.445008 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2025-07-04 18:17:37.445012 | orchestrator | Friday 04 July 2025 18:13:10 +0000 (0:00:00.306) 0:08:31.999 *********** 2025-07-04 18:17:37.445016 | orchestrator | ok: [testbed-node-3] 2025-07-04 18:17:37.445021 | 
orchestrator | ok: [testbed-node-4] 2025-07-04 18:17:37.445025 | orchestrator | ok: [testbed-node-5] 2025-07-04 18:17:37.445030 | orchestrator | 2025-07-04 18:17:37.445034 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2025-07-04 18:17:37.445038 | orchestrator | Friday 04 July 2025 18:13:11 +0000 (0:00:00.902) 0:08:32.902 *********** 2025-07-04 18:17:37.445047 | orchestrator | ok: [testbed-node-3] 2025-07-04 18:17:37.445052 | orchestrator | ok: [testbed-node-4] 2025-07-04 18:17:37.445056 | orchestrator | ok: [testbed-node-5] 2025-07-04 18:17:37.445061 | orchestrator | 2025-07-04 18:17:37.445065 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2025-07-04 18:17:37.445070 | orchestrator | Friday 04 July 2025 18:13:12 +0000 (0:00:00.329) 0:08:33.232 *********** 2025-07-04 18:17:37.445074 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-07-04 18:17:37.445079 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-07-04 18:17:37.445083 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-07-04 18:17:37.445088 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-07-04 18:17:37.445092 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-07-04 18:17:37.445097 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-07-04 18:17:37.445108 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-07-04 18:17:37.445113 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-07-04 18:17:37.445117 | orchestrator | changed: [testbed-node-4] => 
(item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-07-04 18:17:37.445122 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-07-04 18:17:37.445126 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-07-04 18:17:37.445131 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-07-04 18:17:37.445135 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-07-04 18:17:37.445140 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-07-04 18:17:37.445144 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-07-04 18:17:37.445148 | orchestrator | 2025-07-04 18:17:37.445153 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 2025-07-04 18:17:37.445176 | orchestrator | Friday 04 July 2025 18:13:14 +0000 (0:00:02.318) 0:08:35.550 *********** 2025-07-04 18:17:37.445181 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:17:37.445186 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:17:37.445190 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:17:37.445195 | orchestrator | 2025-07-04 18:17:37.445199 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2025-07-04 18:17:37.445204 | orchestrator | Friday 04 July 2025 18:13:14 +0000 (0:00:00.305) 0:08:35.855 *********** 2025-07-04 18:17:37.445208 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-04 18:17:37.445213 | orchestrator | 2025-07-04 18:17:37.445218 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2025-07-04 18:17:37.445222 | orchestrator | Friday 04 July 2025 18:13:15 +0000 (0:00:00.812) 
0:08:36.668 *********** 2025-07-04 18:17:37.445227 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2025-07-04 18:17:37.445231 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2025-07-04 18:17:37.445236 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2025-07-04 18:17:37.445240 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2025-07-04 18:17:37.445245 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2025-07-04 18:17:37.445249 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2025-07-04 18:17:37.445254 | orchestrator | 2025-07-04 18:17:37.445258 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2025-07-04 18:17:37.445266 | orchestrator | Friday 04 July 2025 18:13:16 +0000 (0:00:01.008) 0:08:37.676 *********** 2025-07-04 18:17:37.445271 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-04 18:17:37.445276 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-07-04 18:17:37.445280 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-07-04 18:17:37.445285 | orchestrator | 2025-07-04 18:17:37.445290 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2025-07-04 18:17:37.445294 | orchestrator | Friday 04 July 2025 18:13:18 +0000 (0:00:02.167) 0:08:39.844 *********** 2025-07-04 18:17:37.445299 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-07-04 18:17:37.445303 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-07-04 18:17:37.445308 | orchestrator | changed: [testbed-node-3] 2025-07-04 18:17:37.445312 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-07-04 18:17:37.445317 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-07-04 18:17:37.445322 | orchestrator | changed: [testbed-node-4] 2025-07-04 
18:17:37.445326 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-07-04 18:17:37.445331 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-07-04 18:17:37.445335 | orchestrator | changed: [testbed-node-5] 2025-07-04 18:17:37.445340 | orchestrator | 2025-07-04 18:17:37.445344 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2025-07-04 18:17:37.445349 | orchestrator | Friday 04 July 2025 18:13:20 +0000 (0:00:01.226) 0:08:41.070 *********** 2025-07-04 18:17:37.445353 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-07-04 18:17:37.445358 | orchestrator | 2025-07-04 18:17:37.445362 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2025-07-04 18:17:37.445367 | orchestrator | Friday 04 July 2025 18:13:22 +0000 (0:00:02.705) 0:08:43.775 *********** 2025-07-04 18:17:37.445372 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-04 18:17:37.445376 | orchestrator | 2025-07-04 18:17:37.445381 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2025-07-04 18:17:37.445385 | orchestrator | Friday 04 July 2025 18:13:23 +0000 (0:00:00.548) 0:08:44.324 *********** 2025-07-04 18:17:37.445390 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-a98224fe-e18a-5ddc-b2f0-6ffdc4d7e2d6', 'data_vg': 'ceph-a98224fe-e18a-5ddc-b2f0-6ffdc4d7e2d6'}) 2025-07-04 18:17:37.445396 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-32d6ac83-1783-5cc7-8f93-7bc92d6b2f36', 'data_vg': 'ceph-32d6ac83-1783-5cc7-8f93-7bc92d6b2f36'}) 2025-07-04 18:17:37.445400 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-0c11b362-ac03-5009-be6f-11a9ef5f18dc', 'data_vg': 'ceph-0c11b362-ac03-5009-be6f-11a9ef5f18dc'}) 2025-07-04 18:17:37.445408 | orchestrator | changed: [testbed-node-5] 
=> (item={'data': 'osd-block-38a85088-e19d-56c7-801b-f45e1c084bd2', 'data_vg': 'ceph-38a85088-e19d-56c7-801b-f45e1c084bd2'}) 2025-07-04 18:17:37.445413 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-50c65579-7f86-5010-a824-2221e6b8d3f0', 'data_vg': 'ceph-50c65579-7f86-5010-a824-2221e6b8d3f0'}) 2025-07-04 18:17:37.445417 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-b396848d-3790-5c5a-8f8a-1e47b4270a43', 'data_vg': 'ceph-b396848d-3790-5c5a-8f8a-1e47b4270a43'}) 2025-07-04 18:17:37.445422 | orchestrator | 2025-07-04 18:17:37.445426 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2025-07-04 18:17:37.445431 | orchestrator | Friday 04 July 2025 18:14:08 +0000 (0:00:44.901) 0:09:29.225 *********** 2025-07-04 18:17:37.445435 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:17:37.445440 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:17:37.445444 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:17:37.445449 | orchestrator | 2025-07-04 18:17:37.445453 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2025-07-04 18:17:37.445464 | orchestrator | Friday 04 July 2025 18:14:08 +0000 (0:00:00.581) 0:09:29.806 *********** 2025-07-04 18:17:37.445471 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-04 18:17:37.445475 | orchestrator | 2025-07-04 18:17:37.445480 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2025-07-04 18:17:37.445484 | orchestrator | Friday 04 July 2025 18:14:09 +0000 (0:00:00.546) 0:09:30.352 *********** 2025-07-04 18:17:37.445489 | orchestrator | ok: [testbed-node-3] 2025-07-04 18:17:37.445493 | orchestrator | ok: [testbed-node-4] 2025-07-04 18:17:37.445498 | orchestrator | ok: [testbed-node-5] 2025-07-04 18:17:37.445502 | orchestrator | 2025-07-04 
18:17:37.445507 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2025-07-04 18:17:37.445512 | orchestrator | Friday 04 July 2025 18:14:09 +0000 (0:00:00.652) 0:09:31.005 *********** 2025-07-04 18:17:37.445516 | orchestrator | ok: [testbed-node-3] 2025-07-04 18:17:37.445521 | orchestrator | ok: [testbed-node-4] 2025-07-04 18:17:37.445525 | orchestrator | ok: [testbed-node-5] 2025-07-04 18:17:37.445529 | orchestrator | 2025-07-04 18:17:37.445534 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2025-07-04 18:17:37.445538 | orchestrator | Friday 04 July 2025 18:14:12 +0000 (0:00:02.967) 0:09:33.972 *********** 2025-07-04 18:17:37.445543 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-04 18:17:37.445547 | orchestrator | 2025-07-04 18:17:37.445552 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2025-07-04 18:17:37.445556 | orchestrator | Friday 04 July 2025 18:14:13 +0000 (0:00:00.570) 0:09:34.542 *********** 2025-07-04 18:17:37.445561 | orchestrator | changed: [testbed-node-3] 2025-07-04 18:17:37.445565 | orchestrator | changed: [testbed-node-4] 2025-07-04 18:17:37.445570 | orchestrator | changed: [testbed-node-5] 2025-07-04 18:17:37.445574 | orchestrator | 2025-07-04 18:17:37.445579 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2025-07-04 18:17:37.445583 | orchestrator | Friday 04 July 2025 18:14:14 +0000 (0:00:01.252) 0:09:35.795 *********** 2025-07-04 18:17:37.445587 | orchestrator | changed: [testbed-node-3] 2025-07-04 18:17:37.445592 | orchestrator | changed: [testbed-node-4] 2025-07-04 18:17:37.445597 | orchestrator | changed: [testbed-node-5] 2025-07-04 18:17:37.445601 | orchestrator | 2025-07-04 18:17:37.445605 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] 
*************************************** 2025-07-04 18:17:37.445610 | orchestrator | Friday 04 July 2025 18:14:16 +0000 (0:00:01.477) 0:09:37.272 *********** 2025-07-04 18:17:37.445614 | orchestrator | changed: [testbed-node-3] 2025-07-04 18:17:37.445619 | orchestrator | changed: [testbed-node-4] 2025-07-04 18:17:37.445623 | orchestrator | changed: [testbed-node-5] 2025-07-04 18:17:37.445628 | orchestrator | 2025-07-04 18:17:37.445633 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2025-07-04 18:17:37.445637 | orchestrator | Friday 04 July 2025 18:14:17 +0000 (0:00:01.749) 0:09:39.022 *********** 2025-07-04 18:17:37.445642 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:17:37.445646 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:17:37.445651 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:17:37.445655 | orchestrator | 2025-07-04 18:17:37.445660 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2025-07-04 18:17:37.445664 | orchestrator | Friday 04 July 2025 18:14:18 +0000 (0:00:00.352) 0:09:39.375 *********** 2025-07-04 18:17:37.445669 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:17:37.445673 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:17:37.445677 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:17:37.445682 | orchestrator | 2025-07-04 18:17:37.445686 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2025-07-04 18:17:37.445691 | orchestrator | Friday 04 July 2025 18:14:18 +0000 (0:00:00.331) 0:09:39.706 *********** 2025-07-04 18:17:37.445695 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-07-04 18:17:37.445703 | orchestrator | ok: [testbed-node-4] => (item=2) 2025-07-04 18:17:37.445708 | orchestrator | ok: [testbed-node-5] => (item=1) 2025-07-04 18:17:37.445712 | orchestrator | ok: [testbed-node-3] => (item=3) 2025-07-04 18:17:37.445717 | orchestrator | ok: 
[testbed-node-4] => (item=5)
2025-07-04 18:17:37.445721 | orchestrator | ok: [testbed-node-5] => (item=4)
2025-07-04 18:17:37.445726 | orchestrator |
2025-07-04 18:17:37.445730 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] *****************
2025-07-04 18:17:37.445735 | orchestrator | Friday 04 July 2025 18:14:19 +0000 (0:00:01.305) 0:09:41.012 ***********
2025-07-04 18:17:37.445739 | orchestrator | changed: [testbed-node-3] => (item=0)
2025-07-04 18:17:37.445743 | orchestrator | changed: [testbed-node-4] => (item=2)
2025-07-04 18:17:37.445748 | orchestrator | changed: [testbed-node-5] => (item=1)
2025-07-04 18:17:37.445752 | orchestrator | changed: [testbed-node-3] => (item=3)
2025-07-04 18:17:37.445757 | orchestrator | changed: [testbed-node-4] => (item=5)
2025-07-04 18:17:37.445761 | orchestrator | changed: [testbed-node-5] => (item=4)
2025-07-04 18:17:37.445765 | orchestrator |
2025-07-04 18:17:37.445773 | orchestrator | TASK [ceph-osd : Systemd start osd] ********************************************
2025-07-04 18:17:37.445777 | orchestrator | Friday 04 July 2025 18:14:22 +0000 (0:00:02.216) 0:09:43.229 ***********
2025-07-04 18:17:37.445782 | orchestrator | changed: [testbed-node-3] => (item=0)
2025-07-04 18:17:37.445786 | orchestrator | changed: [testbed-node-5] => (item=1)
2025-07-04 18:17:37.445791 | orchestrator | changed: [testbed-node-4] => (item=2)
2025-07-04 18:17:37.445795 | orchestrator | changed: [testbed-node-3] => (item=3)
2025-07-04 18:17:37.445800 | orchestrator | changed: [testbed-node-5] => (item=4)
2025-07-04 18:17:37.445804 | orchestrator | changed: [testbed-node-4] => (item=5)
2025-07-04 18:17:37.445809 | orchestrator |
2025-07-04 18:17:37.445813 | orchestrator | TASK [ceph-osd : Unset noup flag] **********************************************
2025-07-04 18:17:37.445818 | orchestrator | Friday 04 July 2025 18:14:26 +0000 (0:00:04.687) 0:09:47.917 ***********
2025-07-04 18:17:37.445822 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:17:37.445827 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:17:37.445831 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2025-07-04 18:17:37.445836 | orchestrator |
2025-07-04 18:17:37.445840 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************
2025-07-04 18:17:37.445845 | orchestrator | Friday 04 July 2025 18:14:29 +0000 (0:00:02.855) 0:09:50.772 ***********
2025-07-04 18:17:37.445849 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:17:37.445856 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:17:37.445861 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left).
2025-07-04 18:17:37.445865 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2025-07-04 18:17:37.445870 | orchestrator |
2025-07-04 18:17:37.445874 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] **************************************
2025-07-04 18:17:37.445879 | orchestrator | Friday 04 July 2025 18:14:42 +0000 (0:00:13.069) 0:10:03.842 ***********
2025-07-04 18:17:37.445883 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:17:37.445888 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:17:37.445892 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:17:37.445897 | orchestrator |
2025-07-04 18:17:37.445901 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-07-04 18:17:37.445906 | orchestrator | Friday 04 July 2025 18:14:43 +0000 (0:00:00.866) 0:10:04.709 ***********
2025-07-04 18:17:37.445910 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:17:37.445915 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:17:37.445919 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:17:37.445923 | orchestrator |
2025-07-04 18:17:37.445928 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2025-07-04 18:17:37.445932 | orchestrator | Friday 04 July 2025 18:14:44 +0000 (0:00:00.601) 0:10:05.311 ***********
2025-07-04 18:17:37.445941 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-07-04 18:17:37.445945 | orchestrator |
2025-07-04 18:17:37.445950 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2025-07-04 18:17:37.445954 | orchestrator | Friday 04 July 2025 18:14:44 +0000 (0:00:00.552) 0:10:05.863 ***********
2025-07-04 18:17:37.445959 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-07-04 18:17:37.445963 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-07-04 18:17:37.445968 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-07-04 18:17:37.445972 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:17:37.445977 | orchestrator |
2025-07-04 18:17:37.445981 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2025-07-04 18:17:37.445986 | orchestrator | Friday 04 July 2025 18:14:45 +0000 (0:00:00.457) 0:10:06.321 ***********
2025-07-04 18:17:37.445990 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:17:37.445995 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:17:37.445999 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:17:37.446003 | orchestrator |
2025-07-04 18:17:37.446008 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2025-07-04 18:17:37.446012 | orchestrator | Friday 04 July 2025 18:14:45 +0000 (0:00:00.315) 0:10:06.637 ***********
2025-07-04 18:17:37.446047 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:17:37.446052 | orchestrator |
2025-07-04 18:17:37.446056 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2025-07-04 18:17:37.446061 | orchestrator | Friday 04 July 2025 18:14:45 +0000 (0:00:00.243) 0:10:06.880 ***********
2025-07-04 18:17:37.446065 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:17:37.446069 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:17:37.446074 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:17:37.446078 | orchestrator |
2025-07-04 18:17:37.446083 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2025-07-04 18:17:37.446087 | orchestrator | Friday 04 July 2025 18:14:46 +0000 (0:00:00.645) 0:10:07.526 ***********
2025-07-04 18:17:37.446092 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:17:37.446096 | orchestrator |
2025-07-04 18:17:37.446101 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2025-07-04 18:17:37.446105 | orchestrator | Friday 04 July 2025 18:14:46 +0000 (0:00:00.219) 0:10:07.745 ***********
2025-07-04 18:17:37.446110 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:17:37.446114 | orchestrator |
2025-07-04 18:17:37.446119 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2025-07-04 18:17:37.446123 | orchestrator | Friday 04 July 2025 18:14:46 +0000 (0:00:00.238) 0:10:07.984 ***********
2025-07-04 18:17:37.446128 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:17:37.446132 | orchestrator |
2025-07-04 18:17:37.446137 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2025-07-04 18:17:37.446141 | orchestrator | Friday 04 July 2025 18:14:47 +0000 (0:00:00.148) 0:10:08.133 ***********
2025-07-04 18:17:37.446146 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:17:37.446150 | orchestrator |
2025-07-04 18:17:37.446154 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2025-07-04 18:17:37.446194 | orchestrator | Friday 04 July 2025 18:14:47 +0000 (0:00:00.259) 0:10:08.392 ***********
2025-07-04 18:17:37.446202 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:17:37.446207 | orchestrator |
2025-07-04 18:17:37.446211 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2025-07-04 18:17:37.446216 | orchestrator | Friday 04 July 2025 18:14:47 +0000 (0:00:00.215) 0:10:08.608 ***********
2025-07-04 18:17:37.446220 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-07-04 18:17:37.446225 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-07-04 18:17:37.446229 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-07-04 18:17:37.446234 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:17:37.446242 | orchestrator |
2025-07-04 18:17:37.446246 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2025-07-04 18:17:37.446251 | orchestrator | Friday 04 July 2025 18:14:47 +0000 (0:00:00.394) 0:10:09.003 ***********
2025-07-04 18:17:37.446255 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:17:37.446260 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:17:37.446264 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:17:37.446269 | orchestrator |
2025-07-04 18:17:37.446273 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2025-07-04 18:17:37.446277 | orchestrator | Friday 04 July 2025 18:14:48 +0000 (0:00:00.307) 0:10:09.310 ***********
2025-07-04 18:17:37.446282 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:17:37.446286 | orchestrator |
2025-07-04 18:17:37.446295 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2025-07-04 18:17:37.446299 | orchestrator | Friday 04 July 2025 18:14:49 +0000 (0:00:00.870) 0:10:10.181 ***********
2025-07-04 18:17:37.446304 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:17:37.446308 | orchestrator |
2025-07-04 18:17:37.446312 | orchestrator | PLAY [Apply role ceph-crash] ***************************************************
2025-07-04 18:17:37.446317 | orchestrator |
2025-07-04 18:17:37.446321 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-07-04 18:17:37.446326 | orchestrator | Friday 04 July 2025 18:14:49 +0000 (0:00:00.681) 0:10:10.862 ***********
2025-07-04 18:17:37.446330 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-07-04 18:17:37.446335 | orchestrator |
2025-07-04 18:17:37.446340 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-07-04 18:17:37.446344 | orchestrator | Friday 04 July 2025 18:14:51 +0000 (0:00:01.301) 0:10:12.164 ***********
2025-07-04 18:17:37.446349 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-07-04 18:17:37.446353 | orchestrator |
2025-07-04 18:17:37.446358 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-07-04 18:17:37.446362 | orchestrator | Friday 04 July 2025 18:14:52 +0000 (0:00:01.317) 0:10:13.482 ***********
2025-07-04 18:17:37.446367 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:17:37.446371 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:17:37.446376 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:17:37.446380 | orchestrator | ok: [testbed-node-1]
2025-07-04 18:17:37.446384 | orchestrator | ok: [testbed-node-0]
2025-07-04 18:17:37.446389 | orchestrator | ok: [testbed-node-2]
2025-07-04 18:17:37.446393 | orchestrator |
2025-07-04 18:17:37.446398 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-07-04 18:17:37.446402 | orchestrator | Friday 04 July 2025 18:14:53 +0000 (0:00:01.284) 0:10:14.766 ***********
2025-07-04 18:17:37.446406 | orchestrator | ok: [testbed-node-3]
2025-07-04 18:17:37.446411 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:17:37.446415 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:17:37.446420 | orchestrator | ok: [testbed-node-4]
2025-07-04 18:17:37.446424 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:17:37.446429 | orchestrator | ok: [testbed-node-5]
2025-07-04 18:17:37.446433 | orchestrator |
2025-07-04 18:17:37.446437 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-07-04 18:17:37.446442 | orchestrator | Friday 04 July 2025 18:14:54 +0000 (0:00:00.694) 0:10:15.461 ***********
2025-07-04 18:17:37.446446 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:17:37.446451 | orchestrator | ok: [testbed-node-3]
2025-07-04 18:17:37.446455 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:17:37.446459 | orchestrator | ok: [testbed-node-4]
2025-07-04 18:17:37.446464 | orchestrator | ok: [testbed-node-5]
2025-07-04 18:17:37.446468 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:17:37.446476 | orchestrator |
2025-07-04 18:17:37.446480 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-07-04 18:17:37.446485 | orchestrator | Friday 04 July 2025 18:14:55 +0000 (0:00:00.932) 0:10:16.393 ***********
2025-07-04 18:17:37.446489 | orchestrator | ok: [testbed-node-3]
2025-07-04 18:17:37.446494 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:17:37.446498 | orchestrator | ok: [testbed-node-4]
2025-07-04 18:17:37.446503 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:17:37.446507 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:17:37.446512 | orchestrator | ok: [testbed-node-5]
2025-07-04 18:17:37.446516 | orchestrator |
2025-07-04 18:17:37.446521 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-07-04 18:17:37.446525 | orchestrator | Friday 04 July 2025 18:14:56 +0000 (0:00:00.806) 0:10:17.200 ***********
2025-07-04 18:17:37.446530 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:17:37.446534 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:17:37.446538 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:17:37.446542 | orchestrator | ok: [testbed-node-0]
2025-07-04 18:17:37.446546 | orchestrator | ok: [testbed-node-1]
2025-07-04 18:17:37.446550 | orchestrator | ok: [testbed-node-2]
2025-07-04 18:17:37.446554 | orchestrator |
2025-07-04 18:17:37.446558 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-07-04 18:17:37.446562 | orchestrator | Friday 04 July 2025 18:14:57 +0000 (0:00:01.314) 0:10:18.514 ***********
2025-07-04 18:17:37.446566 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:17:37.446570 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:17:37.446574 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:17:37.446578 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:17:37.446582 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:17:37.446588 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:17:37.446592 | orchestrator |
2025-07-04 18:17:37.446596 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-07-04 18:17:37.446600 | orchestrator | Friday 04 July 2025 18:14:58 +0000 (0:00:00.879) 0:10:19.193 ***********
2025-07-04 18:17:37.446604 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:17:37.446608 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:17:37.446612 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:17:37.446616 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:17:37.446620 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:17:37.446624 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:17:37.446628 | orchestrator |
2025-07-04 18:17:37.446632 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-07-04 18:17:37.446636 | orchestrator | Friday 04 July 2025 18:14:59 +0000 (0:00:00.879) 0:10:20.072 ***********
2025-07-04 18:17:37.446640 | orchestrator | ok: [testbed-node-3]
2025-07-04 18:17:37.446644 | orchestrator | ok: [testbed-node-4]
2025-07-04 18:17:37.446648 | orchestrator | ok: [testbed-node-5]
2025-07-04 18:17:37.446652 | orchestrator | ok: [testbed-node-0]
2025-07-04 18:17:37.446656 | orchestrator | ok: [testbed-node-1]
2025-07-04 18:17:37.446660 | orchestrator | ok: [testbed-node-2]
2025-07-04 18:17:37.446664 | orchestrator |
2025-07-04 18:17:37.446668 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-07-04 18:17:37.446676 | orchestrator | Friday 04 July 2025 18:15:00 +0000 (0:00:01.022) 0:10:21.095 ***********
2025-07-04 18:17:37.446680 | orchestrator | ok: [testbed-node-3]
2025-07-04 18:17:37.446684 | orchestrator | ok: [testbed-node-4]
2025-07-04 18:17:37.446688 | orchestrator | ok: [testbed-node-5]
2025-07-04 18:17:37.446692 | orchestrator | ok: [testbed-node-0]
2025-07-04 18:17:37.446696 | orchestrator | ok: [testbed-node-1]
2025-07-04 18:17:37.446700 | orchestrator | ok: [testbed-node-2]
2025-07-04 18:17:37.446704 | orchestrator |
2025-07-04 18:17:37.446708 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-07-04 18:17:37.446712 | orchestrator | Friday 04 July 2025 18:15:01 +0000 (0:00:01.413) 0:10:22.508 ***********
2025-07-04 18:17:37.446716 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:17:37.446725 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:17:37.446729 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:17:37.446733 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:17:37.446737 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:17:37.446741 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:17:37.446745 | orchestrator |
2025-07-04 18:17:37.446749 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-07-04 18:17:37.446753 | orchestrator | Friday 04 July 2025 18:15:02 +0000 (0:00:00.713) 0:10:23.222 ***********
2025-07-04 18:17:37.446757 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:17:37.446761 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:17:37.446765 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:17:37.446769 | orchestrator | ok: [testbed-node-0]
2025-07-04 18:17:37.446773 | orchestrator | ok: [testbed-node-1]
2025-07-04 18:17:37.446777 | orchestrator | ok: [testbed-node-2]
2025-07-04 18:17:37.446781 | orchestrator |
2025-07-04 18:17:37.446785 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-07-04 18:17:37.446789 | orchestrator | Friday 04 July 2025 18:15:03 +0000 (0:00:00.859) 0:10:24.082 ***********
2025-07-04 18:17:37.446793 | orchestrator | ok: [testbed-node-3]
2025-07-04 18:17:37.446797 | orchestrator | ok: [testbed-node-4]
2025-07-04 18:17:37.446801 | orchestrator | ok: [testbed-node-5]
2025-07-04 18:17:37.446805 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:17:37.446809 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:17:37.446813 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:17:37.446817 | orchestrator |
2025-07-04 18:17:37.446821 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-07-04 18:17:37.446825 | orchestrator | Friday 04 July 2025 18:15:03 +0000 (0:00:00.670) 0:10:24.752 ***********
2025-07-04 18:17:37.446829 | orchestrator | ok: [testbed-node-3]
2025-07-04 18:17:37.446833 | orchestrator | ok: [testbed-node-4]
2025-07-04 18:17:37.446837 | orchestrator | ok: [testbed-node-5]
2025-07-04 18:17:37.446841 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:17:37.446845 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:17:37.446849 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:17:37.446853 | orchestrator |
2025-07-04 18:17:37.446858 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-07-04 18:17:37.446862 | orchestrator | Friday 04 July 2025 18:15:04 +0000 (0:00:00.888) 0:10:25.641 ***********
2025-07-04 18:17:37.446866 | orchestrator | ok: [testbed-node-3]
2025-07-04 18:17:37.446870 | orchestrator | ok: [testbed-node-4]
2025-07-04 18:17:37.446874 | orchestrator | ok: [testbed-node-5]
2025-07-04 18:17:37.446878 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:17:37.446882 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:17:37.446886 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:17:37.446890 | orchestrator |
2025-07-04 18:17:37.446894 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-07-04 18:17:37.446898 | orchestrator | Friday 04 July 2025 18:15:05 +0000 (0:00:00.646) 0:10:26.287 ***********
2025-07-04 18:17:37.446902 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:17:37.446906 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:17:37.446910 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:17:37.446914 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:17:37.446918 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:17:37.446922 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:17:37.446926 | orchestrator |
2025-07-04 18:17:37.446930 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-07-04 18:17:37.446934 | orchestrator | Friday 04 July 2025 18:15:06 +0000 (0:00:00.932) 0:10:27.220 ***********
2025-07-04 18:17:37.446938 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:17:37.446942 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:17:37.446946 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:17:37.446950 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:17:37.446954 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:17:37.446961 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:17:37.446965 | orchestrator |
2025-07-04 18:17:37.446969 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-07-04 18:17:37.446973 | orchestrator | Friday 04 July 2025 18:15:06 +0000 (0:00:00.624) 0:10:27.844 ***********
2025-07-04 18:17:37.446977 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:17:37.446981 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:17:37.446985 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:17:37.446989 | orchestrator | ok: [testbed-node-0]
2025-07-04 18:17:37.446996 | orchestrator | ok: [testbed-node-1]
2025-07-04 18:17:37.447000 | orchestrator | ok: [testbed-node-2]
2025-07-04 18:17:37.447004 | orchestrator |
2025-07-04 18:17:37.447008 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-07-04 18:17:37.447012 | orchestrator | Friday 04 July 2025 18:15:07 +0000 (0:00:00.927) 0:10:28.771 ***********
2025-07-04 18:17:37.447016 | orchestrator | ok: [testbed-node-3]
2025-07-04 18:17:37.447020 | orchestrator | ok: [testbed-node-4]
2025-07-04 18:17:37.447024 | orchestrator | ok: [testbed-node-5]
2025-07-04 18:17:37.447028 | orchestrator | ok: [testbed-node-0]
2025-07-04 18:17:37.447032 | orchestrator | ok: [testbed-node-1]
2025-07-04 18:17:37.447036 | orchestrator | ok: [testbed-node-2]
2025-07-04 18:17:37.447040 | orchestrator |
2025-07-04 18:17:37.447044 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-07-04 18:17:37.447048 | orchestrator | Friday 04 July 2025 18:15:08 +0000 (0:00:00.752) 0:10:29.524 ***********
2025-07-04 18:17:37.447052 | orchestrator | ok: [testbed-node-3]
2025-07-04 18:17:37.447056 | orchestrator | ok: [testbed-node-4]
2025-07-04 18:17:37.447060 | orchestrator | ok: [testbed-node-5]
2025-07-04 18:17:37.447064 | orchestrator | ok: [testbed-node-0]
2025-07-04 18:17:37.447068 | orchestrator | ok: [testbed-node-1]
2025-07-04 18:17:37.447072 | orchestrator | ok: [testbed-node-2]
2025-07-04 18:17:37.447076 | orchestrator |
2025-07-04 18:17:37.447080 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ********************************
2025-07-04 18:17:37.447087 | orchestrator | Friday 04 July 2025 18:15:09 +0000 (0:00:01.499) 0:10:31.023 ***********
2025-07-04 18:17:37.447091 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-07-04 18:17:37.447095 | orchestrator |
2025-07-04 18:17:37.447099 | orchestrator | TASK [ceph-crash : Get keys from monitors] *************************************
2025-07-04 18:17:37.447103 | orchestrator | Friday 04 July 2025 18:15:13 +0000 (0:00:04.015) 0:10:35.039 ***********
2025-07-04 18:17:37.447107 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-07-04 18:17:37.447111 | orchestrator |
2025-07-04 18:17:37.447115 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] *********************************
2025-07-04 18:17:37.447119 | orchestrator | Friday 04 July 2025 18:15:16 +0000 (0:00:02.145) 0:10:37.184 ***********
2025-07-04 18:17:37.447123 | orchestrator | changed: [testbed-node-3]
2025-07-04 18:17:37.447127 | orchestrator | changed: [testbed-node-5]
2025-07-04 18:17:37.447131 | orchestrator | changed: [testbed-node-4]
2025-07-04 18:17:37.447135 | orchestrator | ok: [testbed-node-0]
2025-07-04 18:17:37.447139 | orchestrator | changed: [testbed-node-1]
2025-07-04 18:17:37.447143 | orchestrator | changed: [testbed-node-2]
2025-07-04 18:17:37.447147 | orchestrator |
2025-07-04 18:17:37.447151 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] **************************
2025-07-04 18:17:37.447169 | orchestrator | Friday 04 July 2025 18:15:18 +0000 (0:00:02.297) 0:10:39.481 ***********
2025-07-04 18:17:37.447173 | orchestrator | changed: [testbed-node-3]
2025-07-04 18:17:37.447177 | orchestrator | changed: [testbed-node-4]
2025-07-04 18:17:37.447181 | orchestrator | changed: [testbed-node-5]
2025-07-04 18:17:37.447185 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:17:37.447190 | orchestrator | changed: [testbed-node-1]
2025-07-04 18:17:37.447194 | orchestrator | changed: [testbed-node-2]
2025-07-04 18:17:37.447198 | orchestrator |
2025-07-04 18:17:37.447202 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] **********************************
2025-07-04 18:17:37.447209 | orchestrator | Friday 04 July 2025 18:15:19 +0000 (0:00:01.118) 0:10:40.600 ***********
2025-07-04 18:17:37.447213 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-07-04 18:17:37.447218 | orchestrator |
2025-07-04 18:17:37.447223 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ********
2025-07-04 18:17:37.447227 | orchestrator | Friday 04 July 2025 18:15:21 +0000 (0:00:01.652) 0:10:42.252 ***********
2025-07-04 18:17:37.447231 | orchestrator | changed: [testbed-node-3]
2025-07-04 18:17:37.447235 | orchestrator | changed: [testbed-node-4]
2025-07-04 18:17:37.447239 | orchestrator | changed: [testbed-node-5]
2025-07-04 18:17:37.447243 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:17:37.447247 | orchestrator | changed: [testbed-node-1]
2025-07-04 18:17:37.447251 | orchestrator | changed: [testbed-node-2]
2025-07-04 18:17:37.447255 | orchestrator |
2025-07-04 18:17:37.447259 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] *******************************
2025-07-04 18:17:37.447263 | orchestrator | Friday 04 July 2025 18:15:23 +0000 (0:00:02.038) 0:10:44.290 ***********
2025-07-04 18:17:37.447267 | orchestrator | changed: [testbed-node-3]
2025-07-04 18:17:37.447271 | orchestrator | changed: [testbed-node-4]
2025-07-04 18:17:37.447275 | orchestrator | changed: [testbed-node-1]
2025-07-04 18:17:37.447279 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:17:37.447283 | orchestrator | changed: [testbed-node-5]
2025-07-04 18:17:37.447287 | orchestrator | changed: [testbed-node-2]
2025-07-04 18:17:37.447291 | orchestrator |
2025-07-04 18:17:37.447295 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] ****************************
2025-07-04 18:17:37.447299 | orchestrator | Friday 04 July 2025 18:15:27 +0000 (0:00:03.891) 0:10:48.181 ***********
2025-07-04 18:17:37.447304 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-07-04 18:17:37.447308 | orchestrator |
2025-07-04 18:17:37.447312 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ******
2025-07-04 18:17:37.447316 | orchestrator | Friday 04 July 2025 18:15:28 +0000 (0:00:01.736) 0:10:49.918 ***********
2025-07-04 18:17:37.447320 | orchestrator | ok: [testbed-node-3]
2025-07-04 18:17:37.447324 | orchestrator | ok: [testbed-node-4]
2025-07-04 18:17:37.447328 | orchestrator | ok: [testbed-node-5]
2025-07-04 18:17:37.447332 | orchestrator | ok: [testbed-node-0]
2025-07-04 18:17:37.447336 | orchestrator | ok: [testbed-node-1]
2025-07-04 18:17:37.447340 | orchestrator | ok: [testbed-node-2]
2025-07-04 18:17:37.447344 | orchestrator |
2025-07-04 18:17:37.447348 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] ****************
2025-07-04 18:17:37.447352 | orchestrator | Friday 04 July 2025 18:15:29 +0000 (0:00:00.951) 0:10:50.870 ***********
2025-07-04 18:17:37.447356 | orchestrator | changed: [testbed-node-3]
2025-07-04 18:17:37.447360 | orchestrator | changed: [testbed-node-4]
2025-07-04 18:17:37.447364 | orchestrator | changed: [testbed-node-5]
2025-07-04 18:17:37.447368 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:17:37.447375 | orchestrator | changed: [testbed-node-1]
2025-07-04 18:17:37.447379 | orchestrator | changed: [testbed-node-2]
2025-07-04 18:17:37.447383 | orchestrator |
2025-07-04 18:17:37.447387 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] *******
2025-07-04 18:17:37.447391 | orchestrator | Friday 04 July 2025 18:15:32 +0000 (0:00:02.434) 0:10:53.305 ***********
2025-07-04 18:17:37.447395 | orchestrator | ok: [testbed-node-3]
2025-07-04 18:17:37.447399 | orchestrator | ok: [testbed-node-4]
2025-07-04 18:17:37.447403 | orchestrator | ok: [testbed-node-5]
2025-07-04 18:17:37.447407 | orchestrator | ok: [testbed-node-0]
2025-07-04 18:17:37.447411 | orchestrator | ok: [testbed-node-1]
2025-07-04 18:17:37.447415 | orchestrator | ok: [testbed-node-2]
2025-07-04 18:17:37.447419 | orchestrator |
2025-07-04 18:17:37.447423 | orchestrator | PLAY [Apply role ceph-mds] *****************************************************
2025-07-04 18:17:37.447427 | orchestrator |
2025-07-04 18:17:37.447435 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-07-04 18:17:37.447439 | orchestrator | Friday 04 July 2025 18:15:33 +0000 (0:00:01.121) 0:10:54.426 ***********
2025-07-04 18:17:37.447443 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-07-04 18:17:37.447447 | orchestrator |
2025-07-04 18:17:37.447451 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-07-04 18:17:37.447457 | orchestrator | Friday 04 July 2025 18:15:33 +0000 (0:00:00.511) 0:10:54.938 ***********
2025-07-04 18:17:37.447461 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-07-04 18:17:37.447465 | orchestrator |
2025-07-04 18:17:37.447469 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-07-04 18:17:37.447473 | orchestrator | Friday 04 July 2025 18:15:34 +0000 (0:00:00.798) 0:10:55.737 ***********
2025-07-04 18:17:37.447477 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:17:37.447481 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:17:37.447486 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:17:37.447490 | orchestrator |
2025-07-04 18:17:37.447494 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-07-04 18:17:37.447498 | orchestrator | Friday 04 July 2025 18:15:35 +0000 (0:00:00.341) 0:10:56.078 ***********
2025-07-04 18:17:37.447502 | orchestrator | ok: [testbed-node-3]
2025-07-04 18:17:37.447506 | orchestrator | ok: [testbed-node-4]
2025-07-04 18:17:37.447510 | orchestrator | ok: [testbed-node-5]
2025-07-04 18:17:37.447514 | orchestrator |
2025-07-04 18:17:37.447518 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-07-04 18:17:37.447522 | orchestrator | Friday 04 July 2025 18:15:35 +0000 (0:00:00.667) 0:10:56.746 ***********
2025-07-04 18:17:37.447526 | orchestrator | ok: [testbed-node-3]
2025-07-04 18:17:37.447530 | orchestrator | ok: [testbed-node-4]
2025-07-04 18:17:37.447534 | orchestrator | ok: [testbed-node-5]
2025-07-04 18:17:37.447538 | orchestrator |
2025-07-04 18:17:37.447542 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-07-04 18:17:37.447546 | orchestrator | Friday 04 July 2025 18:15:36 +0000 (0:00:01.014) 0:10:57.760 ***********
2025-07-04 18:17:37.447550 | orchestrator | ok: [testbed-node-3]
2025-07-04 18:17:37.447554 | orchestrator | ok: [testbed-node-4]
2025-07-04 18:17:37.447558 | orchestrator | ok: [testbed-node-5]
2025-07-04 18:17:37.447562 | orchestrator |
2025-07-04 18:17:37.447566 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-07-04 18:17:37.447570 | orchestrator | Friday 04 July 2025 18:15:37 +0000 (0:00:00.716) 0:10:58.477 ***********
2025-07-04 18:17:37.447574 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:17:37.447578 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:17:37.447582 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:17:37.447586 | orchestrator |
2025-07-04 18:17:37.447591 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-07-04 18:17:37.447595 | orchestrator | Friday 04 July 2025 18:15:37 +0000 (0:00:00.344) 0:10:58.822 ***********
2025-07-04 18:17:37.447602 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:17:37.447608 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:17:37.447614 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:17:37.447621 | orchestrator |
2025-07-04 18:17:37.447627 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-07-04 18:17:37.447634 | orchestrator | Friday 04 July 2025 18:15:38 +0000 (0:00:00.313) 0:10:59.135 ***********
2025-07-04 18:17:37.447640 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:17:37.447647 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:17:37.447653 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:17:37.447659 | orchestrator |
2025-07-04 18:17:37.447666 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-07-04 18:17:37.447672 | orchestrator | Friday 04 July 2025 18:15:38 +0000 (0:00:00.626) 0:10:59.761 ***********
2025-07-04 18:17:37.447682 | orchestrator | ok: [testbed-node-3]
2025-07-04 18:17:37.447689 | orchestrator | ok: [testbed-node-4]
2025-07-04 18:17:37.447695 | orchestrator | ok: [testbed-node-5]
2025-07-04 18:17:37.447702 | orchestrator |
2025-07-04 18:17:37.447707 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-07-04 18:17:37.447712 | orchestrator | Friday 04 July 2025 18:15:39 +0000 (0:00:00.725) 0:11:00.486 ***********
2025-07-04 18:17:37.447716 | orchestrator | ok: [testbed-node-3]
2025-07-04 18:17:37.447720 | orchestrator | ok: [testbed-node-4]
2025-07-04 18:17:37.447724 | orchestrator | ok: [testbed-node-5]
2025-07-04 18:17:37.447728 | orchestrator |
2025-07-04 18:17:37.447732 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-07-04 18:17:37.447736 | orchestrator | Friday 04 July 2025 18:15:40 +0000 (0:00:00.733) 0:11:01.220 ***********
2025-07-04 18:17:37.447740 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:17:37.447744 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:17:37.447748 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:17:37.447751 | orchestrator |
2025-07-04 18:17:37.447755 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-07-04 18:17:37.447759 | orchestrator | Friday 04 July 2025 18:15:40 +0000 (0:00:00.332) 0:11:01.552 ***********
2025-07-04 18:17:37.447763 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:17:37.447767 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:17:37.447771 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:17:37.447775 | orchestrator |
2025-07-04 18:17:37.447783 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-07-04 18:17:37.447787 | orchestrator | Friday 04 July 2025 18:15:41 +0000 (0:00:00.632) 0:11:02.185 ***********
2025-07-04 18:17:37.447791 | orchestrator | ok: [testbed-node-3]
2025-07-04 18:17:37.447795 | orchestrator | ok: [testbed-node-4]
2025-07-04 18:17:37.447799 | orchestrator | ok: [testbed-node-5]
2025-07-04 18:17:37.447803 | orchestrator |
2025-07-04 18:17:37.447807 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-07-04 18:17:37.447811 | orchestrator | Friday 04 July 2025 18:15:41 +0000 (0:00:00.377) 0:11:02.562 ***********
2025-07-04 18:17:37.447815 | orchestrator | ok: [testbed-node-3]
2025-07-04 18:17:37.447819 | orchestrator | ok: [testbed-node-4]
2025-07-04 18:17:37.447823 | orchestrator | ok: [testbed-node-5]
2025-07-04 18:17:37.447827 | orchestrator |
2025-07-04 18:17:37.447831 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-07-04 18:17:37.447835 | orchestrator | Friday 04 July 2025 18:15:41 +0000 (0:00:00.340) 0:11:02.902 ***********
2025-07-04 18:17:37.447839 | orchestrator | ok: [testbed-node-3]
2025-07-04 18:17:37.447843 | orchestrator | ok: [testbed-node-4]
2025-07-04 18:17:37.447847 | orchestrator | ok: [testbed-node-5]
2025-07-04 18:17:37.447851 | orchestrator |
2025-07-04 18:17:37.447855 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-07-04 18:17:37.447859 | orchestrator | Friday 04 July 2025 18:15:42 +0000 (0:00:00.331) 0:11:03.234 ***********
2025-07-04 18:17:37.447866 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:17:37.447870 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:17:37.447874 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:17:37.447878 | orchestrator |
2025-07-04 18:17:37.447882 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-07-04 18:17:37.447886 | orchestrator | Friday 04 July 2025 18:15:42 +0000 (0:00:00.614) 0:11:03.849 ***********
2025-07-04 18:17:37.447890 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:17:37.447894 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:17:37.447898 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:17:37.447902 | orchestrator |
2025-07-04 18:17:37.447906 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-07-04 18:17:37.447910 | orchestrator | Friday 04 July 2025 18:15:43 +0000 (0:00:00.308) 0:11:04.158 ***********
2025-07-04 18:17:37.447914 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:17:37.447918 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:17:37.447925 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:17:37.447929 | orchestrator |
2025-07-04 18:17:37.447933 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-07-04 18:17:37.447937 | orchestrator | Friday 04 July 2025 18:15:43 +0000 (0:00:00.311) 0:11:04.469 ***********
2025-07-04 18:17:37.447942 | orchestrator | ok: [testbed-node-3]
2025-07-04 18:17:37.447946 | orchestrator | ok: [testbed-node-4]
2025-07-04 18:17:37.447950 | orchestrator | ok: [testbed-node-5]
2025-07-04 18:17:37.447954 | orchestrator |
2025-07-04 18:17:37.447958 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-07-04 18:17:37.447962 | orchestrator | Friday 04 July 2025 18:15:43 +0000 (0:00:00.366) 0:11:04.836 ***********
2025-07-04 18:17:37.447966 | orchestrator | ok: [testbed-node-3]
2025-07-04 18:17:37.447970 | orchestrator | ok: [testbed-node-4]
2025-07-04 18:17:37.447974 | orchestrator | ok: [testbed-node-5]
2025-07-04 18:17:37.447977 | orchestrator |
2025-07-04 18:17:37.447981 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
2025-07-04 18:17:37.447986 | orchestrator | Friday 04 July 2025 18:15:44 +0000 (0:00:00.843) 0:11:05.679 ***********
2025-07-04 18:17:37.447990 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:17:37.447993 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:17:37.447997 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml
for testbed-node-3 2025-07-04 18:17:37.448002 | orchestrator | 2025-07-04 18:17:37.448005 | orchestrator | TASK [ceph-facts : Get current default crush rule details] ********************* 2025-07-04 18:17:37.448009 | orchestrator | Friday 04 July 2025 18:15:45 +0000 (0:00:00.463) 0:11:06.143 *********** 2025-07-04 18:17:37.448014 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-07-04 18:17:37.448018 | orchestrator | 2025-07-04 18:17:37.448022 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************ 2025-07-04 18:17:37.448025 | orchestrator | Friday 04 July 2025 18:15:47 +0000 (0:00:02.326) 0:11:08.470 *********** 2025-07-04 18:17:37.448031 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2025-07-04 18:17:37.448037 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:17:37.448041 | orchestrator | 2025-07-04 18:17:37.448045 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2025-07-04 18:17:37.448049 | orchestrator | Friday 04 July 2025 18:15:47 +0000 (0:00:00.231) 0:11:08.701 *********** 2025-07-04 18:17:37.448054 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-07-04 18:17:37.448063 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-07-04 18:17:37.448068 | orchestrator | 
2025-07-04 18:17:37.448072 | orchestrator | TASK [ceph-mds : Create ceph filesystem] *************************************** 2025-07-04 18:17:37.448076 | orchestrator | Friday 04 July 2025 18:15:56 +0000 (0:00:08.409) 0:11:17.110 *********** 2025-07-04 18:17:37.448080 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-07-04 18:17:37.448084 | orchestrator | 2025-07-04 18:17:37.448090 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2025-07-04 18:17:37.448094 | orchestrator | Friday 04 July 2025 18:15:59 +0000 (0:00:03.555) 0:11:20.666 *********** 2025-07-04 18:17:37.448098 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-04 18:17:37.448102 | orchestrator | 2025-07-04 18:17:37.448106 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2025-07-04 18:17:37.448115 | orchestrator | Friday 04 July 2025 18:16:00 +0000 (0:00:00.543) 0:11:21.209 *********** 2025-07-04 18:17:37.448119 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2025-07-04 18:17:37.448123 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2025-07-04 18:17:37.448127 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2025-07-04 18:17:37.448131 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2025-07-04 18:17:37.448138 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2025-07-04 18:17:37.448145 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2025-07-04 18:17:37.448151 | orchestrator | 2025-07-04 18:17:37.448171 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2025-07-04 18:17:37.448178 | orchestrator | Friday 04 July 2025 18:16:01 +0000 (0:00:00.987) 
0:11:22.197 *********** 2025-07-04 18:17:37.448185 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-04 18:17:37.448191 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-07-04 18:17:37.448258 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-07-04 18:17:37.448263 | orchestrator | 2025-07-04 18:17:37.448267 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2025-07-04 18:17:37.448271 | orchestrator | Friday 04 July 2025 18:16:03 +0000 (0:00:02.323) 0:11:24.521 *********** 2025-07-04 18:17:37.448275 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-07-04 18:17:37.448280 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-07-04 18:17:37.448284 | orchestrator | changed: [testbed-node-3] 2025-07-04 18:17:37.448288 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-07-04 18:17:37.448292 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-07-04 18:17:37.448296 | orchestrator | changed: [testbed-node-4] 2025-07-04 18:17:37.448300 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-07-04 18:17:37.448304 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-07-04 18:17:37.448308 | orchestrator | changed: [testbed-node-5] 2025-07-04 18:17:37.448312 | orchestrator | 2025-07-04 18:17:37.448316 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2025-07-04 18:17:37.448320 | orchestrator | Friday 04 July 2025 18:16:05 +0000 (0:00:01.548) 0:11:26.069 *********** 2025-07-04 18:17:37.448324 | orchestrator | changed: [testbed-node-3] 2025-07-04 18:17:37.448328 | orchestrator | changed: [testbed-node-4] 2025-07-04 18:17:37.448332 | orchestrator | changed: [testbed-node-5] 2025-07-04 18:17:37.448336 | orchestrator | 2025-07-04 18:17:37.448340 | orchestrator | TASK [ceph-mds : Non_containerized.yml] 
**************************************** 2025-07-04 18:17:37.448344 | orchestrator | Friday 04 July 2025 18:16:07 +0000 (0:00:02.713) 0:11:28.783 *********** 2025-07-04 18:17:37.448349 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:17:37.448353 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:17:37.448357 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:17:37.448361 | orchestrator | 2025-07-04 18:17:37.448365 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2025-07-04 18:17:37.448369 | orchestrator | Friday 04 July 2025 18:16:08 +0000 (0:00:00.328) 0:11:29.111 *********** 2025-07-04 18:17:37.448373 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-04 18:17:37.448377 | orchestrator | 2025-07-04 18:17:37.448381 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2025-07-04 18:17:37.448385 | orchestrator | Friday 04 July 2025 18:16:08 +0000 (0:00:00.832) 0:11:29.943 *********** 2025-07-04 18:17:37.448389 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-04 18:17:37.448393 | orchestrator | 2025-07-04 18:17:37.448397 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2025-07-04 18:17:37.448406 | orchestrator | Friday 04 July 2025 18:16:09 +0000 (0:00:00.544) 0:11:30.488 *********** 2025-07-04 18:17:37.448411 | orchestrator | changed: [testbed-node-3] 2025-07-04 18:17:37.448415 | orchestrator | changed: [testbed-node-5] 2025-07-04 18:17:37.448419 | orchestrator | changed: [testbed-node-4] 2025-07-04 18:17:37.448423 | orchestrator | 2025-07-04 18:17:37.448427 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2025-07-04 18:17:37.448431 | orchestrator | Friday 04 July 2025 18:16:10 +0000 
(0:00:01.341) 0:11:31.830 *********** 2025-07-04 18:17:37.448435 | orchestrator | changed: [testbed-node-3] 2025-07-04 18:17:37.448439 | orchestrator | changed: [testbed-node-4] 2025-07-04 18:17:37.448443 | orchestrator | changed: [testbed-node-5] 2025-07-04 18:17:37.448447 | orchestrator | 2025-07-04 18:17:37.448451 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2025-07-04 18:17:37.448455 | orchestrator | Friday 04 July 2025 18:16:12 +0000 (0:00:01.623) 0:11:33.453 *********** 2025-07-04 18:17:37.448459 | orchestrator | changed: [testbed-node-3] 2025-07-04 18:17:37.448463 | orchestrator | changed: [testbed-node-4] 2025-07-04 18:17:37.448467 | orchestrator | changed: [testbed-node-5] 2025-07-04 18:17:37.448471 | orchestrator | 2025-07-04 18:17:37.448475 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2025-07-04 18:17:37.448480 | orchestrator | Friday 04 July 2025 18:16:14 +0000 (0:00:01.938) 0:11:35.391 *********** 2025-07-04 18:17:37.448484 | orchestrator | changed: [testbed-node-3] 2025-07-04 18:17:37.448488 | orchestrator | changed: [testbed-node-5] 2025-07-04 18:17:37.448492 | orchestrator | changed: [testbed-node-4] 2025-07-04 18:17:37.448496 | orchestrator | 2025-07-04 18:17:37.448503 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2025-07-04 18:17:37.448508 | orchestrator | Friday 04 July 2025 18:16:16 +0000 (0:00:02.154) 0:11:37.546 *********** 2025-07-04 18:17:37.448512 | orchestrator | ok: [testbed-node-3] 2025-07-04 18:17:37.448516 | orchestrator | ok: [testbed-node-4] 2025-07-04 18:17:37.448520 | orchestrator | ok: [testbed-node-5] 2025-07-04 18:17:37.448524 | orchestrator | 2025-07-04 18:17:37.448528 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-07-04 18:17:37.448532 | orchestrator | Friday 04 July 2025 18:16:18 +0000 (0:00:01.849) 0:11:39.396 
*********** 2025-07-04 18:17:37.448536 | orchestrator | changed: [testbed-node-3] 2025-07-04 18:17:37.448540 | orchestrator | changed: [testbed-node-4] 2025-07-04 18:17:37.448544 | orchestrator | changed: [testbed-node-5] 2025-07-04 18:17:37.448548 | orchestrator | 2025-07-04 18:17:37.448552 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2025-07-04 18:17:37.448556 | orchestrator | Friday 04 July 2025 18:16:19 +0000 (0:00:00.797) 0:11:40.193 *********** 2025-07-04 18:17:37.448560 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-04 18:17:37.448564 | orchestrator | 2025-07-04 18:17:37.448568 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2025-07-04 18:17:37.448572 | orchestrator | Friday 04 July 2025 18:16:19 +0000 (0:00:00.819) 0:11:41.013 *********** 2025-07-04 18:17:37.448580 | orchestrator | ok: [testbed-node-3] 2025-07-04 18:17:37.448584 | orchestrator | ok: [testbed-node-4] 2025-07-04 18:17:37.448588 | orchestrator | ok: [testbed-node-5] 2025-07-04 18:17:37.448592 | orchestrator | 2025-07-04 18:17:37.448596 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2025-07-04 18:17:37.448600 | orchestrator | Friday 04 July 2025 18:16:20 +0000 (0:00:00.353) 0:11:41.367 *********** 2025-07-04 18:17:37.448604 | orchestrator | changed: [testbed-node-3] 2025-07-04 18:17:37.448608 | orchestrator | changed: [testbed-node-5] 2025-07-04 18:17:37.448612 | orchestrator | changed: [testbed-node-4] 2025-07-04 18:17:37.448617 | orchestrator | 2025-07-04 18:17:37.448621 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2025-07-04 18:17:37.448625 | orchestrator | Friday 04 July 2025 18:16:21 +0000 (0:00:01.266) 0:11:42.633 *********** 2025-07-04 18:17:37.448633 | orchestrator | skipping: [testbed-node-3] 
=> (item=testbed-node-3)  2025-07-04 18:17:37.448637 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-07-04 18:17:37.448641 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-07-04 18:17:37.448645 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:17:37.448649 | orchestrator | 2025-07-04 18:17:37.448653 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2025-07-04 18:17:37.448657 | orchestrator | Friday 04 July 2025 18:16:22 +0000 (0:00:01.377) 0:11:44.011 *********** 2025-07-04 18:17:37.448661 | orchestrator | ok: [testbed-node-3] 2025-07-04 18:17:37.448665 | orchestrator | ok: [testbed-node-4] 2025-07-04 18:17:37.448669 | orchestrator | ok: [testbed-node-5] 2025-07-04 18:17:37.448673 | orchestrator | 2025-07-04 18:17:37.448677 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-07-04 18:17:37.448681 | orchestrator | 2025-07-04 18:17:37.448685 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-07-04 18:17:37.448689 | orchestrator | Friday 04 July 2025 18:16:23 +0000 (0:00:00.846) 0:11:44.858 *********** 2025-07-04 18:17:37.448693 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-04 18:17:37.448697 | orchestrator | 2025-07-04 18:17:37.448702 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-07-04 18:17:37.448706 | orchestrator | Friday 04 July 2025 18:16:24 +0000 (0:00:00.508) 0:11:45.366 *********** 2025-07-04 18:17:37.448710 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-04 18:17:37.448714 | orchestrator | 2025-07-04 18:17:37.448718 | orchestrator | TASK [ceph-handler : Check for a mon container] 
******************************** 2025-07-04 18:17:37.448722 | orchestrator | Friday 04 July 2025 18:16:25 +0000 (0:00:00.783) 0:11:46.149 *********** 2025-07-04 18:17:37.448726 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:17:37.448730 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:17:37.448734 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:17:37.448738 | orchestrator | 2025-07-04 18:17:37.448742 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-07-04 18:17:37.448746 | orchestrator | Friday 04 July 2025 18:16:25 +0000 (0:00:00.389) 0:11:46.539 *********** 2025-07-04 18:17:37.448750 | orchestrator | ok: [testbed-node-3] 2025-07-04 18:17:37.448754 | orchestrator | ok: [testbed-node-4] 2025-07-04 18:17:37.448758 | orchestrator | ok: [testbed-node-5] 2025-07-04 18:17:37.448762 | orchestrator | 2025-07-04 18:17:37.448766 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-07-04 18:17:37.448770 | orchestrator | Friday 04 July 2025 18:16:26 +0000 (0:00:00.740) 0:11:47.280 *********** 2025-07-04 18:17:37.448774 | orchestrator | ok: [testbed-node-3] 2025-07-04 18:17:37.448778 | orchestrator | ok: [testbed-node-4] 2025-07-04 18:17:37.448782 | orchestrator | ok: [testbed-node-5] 2025-07-04 18:17:37.448786 | orchestrator | 2025-07-04 18:17:37.448790 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-07-04 18:17:37.448794 | orchestrator | Friday 04 July 2025 18:16:27 +0000 (0:00:00.777) 0:11:48.057 *********** 2025-07-04 18:17:37.448799 | orchestrator | ok: [testbed-node-3] 2025-07-04 18:17:37.448811 | orchestrator | ok: [testbed-node-4] 2025-07-04 18:17:37.448816 | orchestrator | ok: [testbed-node-5] 2025-07-04 18:17:37.448820 | orchestrator | 2025-07-04 18:17:37.448824 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-07-04 
18:17:37.448828 | orchestrator | Friday 04 July 2025 18:16:28 +0000 (0:00:01.131) 0:11:49.188 *********** 2025-07-04 18:17:37.448832 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:17:37.448836 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:17:37.448840 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:17:37.448844 | orchestrator | 2025-07-04 18:17:37.448848 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-07-04 18:17:37.448858 | orchestrator | Friday 04 July 2025 18:16:28 +0000 (0:00:00.312) 0:11:49.501 *********** 2025-07-04 18:17:37.448865 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:17:37.448869 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:17:37.448873 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:17:37.448877 | orchestrator | 2025-07-04 18:17:37.448881 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-07-04 18:17:37.448885 | orchestrator | Friday 04 July 2025 18:16:28 +0000 (0:00:00.314) 0:11:49.816 *********** 2025-07-04 18:17:37.448889 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:17:37.448893 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:17:37.448897 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:17:37.448901 | orchestrator | 2025-07-04 18:17:37.448906 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-07-04 18:17:37.448910 | orchestrator | Friday 04 July 2025 18:16:29 +0000 (0:00:00.313) 0:11:50.129 *********** 2025-07-04 18:17:37.448914 | orchestrator | ok: [testbed-node-3] 2025-07-04 18:17:37.448918 | orchestrator | ok: [testbed-node-4] 2025-07-04 18:17:37.448922 | orchestrator | ok: [testbed-node-5] 2025-07-04 18:17:37.448926 | orchestrator | 2025-07-04 18:17:37.448930 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-07-04 18:17:37.448934 | 
orchestrator | Friday 04 July 2025 18:16:30 +0000 (0:00:01.126) 0:11:51.255 *********** 2025-07-04 18:17:37.448938 | orchestrator | ok: [testbed-node-3] 2025-07-04 18:17:37.448942 | orchestrator | ok: [testbed-node-4] 2025-07-04 18:17:37.448946 | orchestrator | ok: [testbed-node-5] 2025-07-04 18:17:37.448950 | orchestrator | 2025-07-04 18:17:37.448956 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-07-04 18:17:37.448960 | orchestrator | Friday 04 July 2025 18:16:30 +0000 (0:00:00.720) 0:11:51.975 *********** 2025-07-04 18:17:37.448964 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:17:37.448968 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:17:37.448972 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:17:37.448977 | orchestrator | 2025-07-04 18:17:37.448981 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-07-04 18:17:37.448985 | orchestrator | Friday 04 July 2025 18:16:31 +0000 (0:00:00.279) 0:11:52.255 *********** 2025-07-04 18:17:37.448989 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:17:37.448993 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:17:37.448997 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:17:37.449001 | orchestrator | 2025-07-04 18:17:37.449005 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-07-04 18:17:37.449009 | orchestrator | Friday 04 July 2025 18:16:31 +0000 (0:00:00.324) 0:11:52.579 *********** 2025-07-04 18:17:37.449013 | orchestrator | ok: [testbed-node-3] 2025-07-04 18:17:37.449017 | orchestrator | ok: [testbed-node-4] 2025-07-04 18:17:37.449021 | orchestrator | ok: [testbed-node-5] 2025-07-04 18:17:37.449025 | orchestrator | 2025-07-04 18:17:37.449029 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-07-04 18:17:37.449033 | orchestrator | Friday 04 July 2025 
18:16:32 +0000 (0:00:00.585) 0:11:53.164 *********** 2025-07-04 18:17:37.449037 | orchestrator | ok: [testbed-node-3] 2025-07-04 18:17:37.449041 | orchestrator | ok: [testbed-node-4] 2025-07-04 18:17:37.449045 | orchestrator | ok: [testbed-node-5] 2025-07-04 18:17:37.449049 | orchestrator | 2025-07-04 18:17:37.449053 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-07-04 18:17:37.449057 | orchestrator | Friday 04 July 2025 18:16:32 +0000 (0:00:00.355) 0:11:53.520 *********** 2025-07-04 18:17:37.449062 | orchestrator | ok: [testbed-node-3] 2025-07-04 18:17:37.449065 | orchestrator | ok: [testbed-node-4] 2025-07-04 18:17:37.449069 | orchestrator | ok: [testbed-node-5] 2025-07-04 18:17:37.449073 | orchestrator | 2025-07-04 18:17:37.449078 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-07-04 18:17:37.449082 | orchestrator | Friday 04 July 2025 18:16:32 +0000 (0:00:00.337) 0:11:53.857 *********** 2025-07-04 18:17:37.449086 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:17:37.449093 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:17:37.449097 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:17:37.449101 | orchestrator | 2025-07-04 18:17:37.449105 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-07-04 18:17:37.449109 | orchestrator | Friday 04 July 2025 18:16:33 +0000 (0:00:00.299) 0:11:54.157 *********** 2025-07-04 18:17:37.449114 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:17:37.449118 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:17:37.449122 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:17:37.449126 | orchestrator | 2025-07-04 18:17:37.449130 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-07-04 18:17:37.449134 | orchestrator | Friday 04 July 2025 18:16:33 +0000 (0:00:00.597) 
0:11:54.754 *********** 2025-07-04 18:17:37.449138 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:17:37.449142 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:17:37.449146 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:17:37.449150 | orchestrator | 2025-07-04 18:17:37.449154 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-07-04 18:17:37.449184 | orchestrator | Friday 04 July 2025 18:16:34 +0000 (0:00:00.330) 0:11:55.085 *********** 2025-07-04 18:17:37.449188 | orchestrator | ok: [testbed-node-3] 2025-07-04 18:17:37.449192 | orchestrator | ok: [testbed-node-4] 2025-07-04 18:17:37.449196 | orchestrator | ok: [testbed-node-5] 2025-07-04 18:17:37.449200 | orchestrator | 2025-07-04 18:17:37.449204 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-07-04 18:17:37.449208 | orchestrator | Friday 04 July 2025 18:16:34 +0000 (0:00:00.346) 0:11:55.432 *********** 2025-07-04 18:17:37.449212 | orchestrator | ok: [testbed-node-3] 2025-07-04 18:17:37.449216 | orchestrator | ok: [testbed-node-4] 2025-07-04 18:17:37.449220 | orchestrator | ok: [testbed-node-5] 2025-07-04 18:17:37.449224 | orchestrator | 2025-07-04 18:17:37.449228 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2025-07-04 18:17:37.449232 | orchestrator | Friday 04 July 2025 18:16:35 +0000 (0:00:00.869) 0:11:56.301 *********** 2025-07-04 18:17:37.449236 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-04 18:17:37.449241 | orchestrator | 2025-07-04 18:17:37.449245 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-07-04 18:17:37.449249 | orchestrator | Friday 04 July 2025 18:16:35 +0000 (0:00:00.556) 0:11:56.858 *********** 2025-07-04 18:17:37.449256 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2025-07-04 18:17:37.449260 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-07-04 18:17:37.449264 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-07-04 18:17:37.449268 | orchestrator | 2025-07-04 18:17:37.449272 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-07-04 18:17:37.449276 | orchestrator | Friday 04 July 2025 18:16:38 +0000 (0:00:02.228) 0:11:59.087 *********** 2025-07-04 18:17:37.449280 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-07-04 18:17:37.449284 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-07-04 18:17:37.449288 | orchestrator | changed: [testbed-node-3] 2025-07-04 18:17:37.449292 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-07-04 18:17:37.449296 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-07-04 18:17:37.449300 | orchestrator | changed: [testbed-node-4] 2025-07-04 18:17:37.449304 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-07-04 18:17:37.449308 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-07-04 18:17:37.449312 | orchestrator | changed: [testbed-node-5] 2025-07-04 18:17:37.449316 | orchestrator | 2025-07-04 18:17:37.449320 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2025-07-04 18:17:37.449325 | orchestrator | Friday 04 July 2025 18:16:39 +0000 (0:00:01.506) 0:12:00.593 *********** 2025-07-04 18:17:37.449332 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:17:37.449340 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:17:37.449344 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:17:37.449348 | orchestrator | 2025-07-04 18:17:37.449352 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2025-07-04 18:17:37.449356 | orchestrator | Friday 04 July 2025 18:16:39 +0000 
(0:00:00.324) 0:12:00.918 ***********
2025-07-04 18:17:37.449360 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-07-04 18:17:37.449364 | orchestrator |
2025-07-04 18:17:37.449369 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] *****************************
2025-07-04 18:17:37.449373 | orchestrator | Friday 04 July 2025 18:16:40 +0000 (0:00:00.611) 0:12:01.530 ***********
2025-07-04 18:17:37.449377 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-07-04 18:17:37.449381 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-07-04 18:17:37.449385 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-07-04 18:17:37.449390 | orchestrator |
2025-07-04 18:17:37.449394 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ******************************************
2025-07-04 18:17:37.449398 | orchestrator | Friday 04 July 2025 18:16:41 +0000 (0:00:01.400) 0:12:02.930 ***********
2025-07-04 18:17:37.449402 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-07-04 18:17:37.449406 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2025-07-04 18:17:37.449410 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-07-04 18:17:37.449414 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2025-07-04 18:17:37.449418 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-07-04 18:17:37.449422 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2025-07-04 18:17:37.449426 | orchestrator |
2025-07-04 18:17:37.449429 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2025-07-04 18:17:37.449433 | orchestrator | Friday 04 July 2025 18:16:46 +0000 (0:00:04.323) 0:12:07.254 ***********
2025-07-04 18:17:37.449437 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-07-04 18:17:37.449440 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2025-07-04 18:17:37.449444 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-07-04 18:17:37.449448 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}]
2025-07-04 18:17:37.449451 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-07-04 18:17:37.449455 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2025-07-04 18:17:37.449459 | orchestrator |
2025-07-04 18:17:37.449462 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2025-07-04 18:17:37.449466 | orchestrator | Friday 04 July 2025 18:16:48 +0000 (0:00:02.457) 0:12:09.711 ***********
2025-07-04 18:17:37.449470 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-07-04 18:17:37.449473 | orchestrator | changed: [testbed-node-3]
2025-07-04 18:17:37.449477 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-07-04 18:17:37.449481 | orchestrator | changed: [testbed-node-4]
2025-07-04 18:17:37.449484 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-07-04 18:17:37.449488 | orchestrator | changed: [testbed-node-5]
2025-07-04 18:17:37.449495 | orchestrator |
2025-07-04 18:17:37.449499 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] **************************************
2025-07-04 18:17:37.449503 | orchestrator | Friday 04 July 2025 18:16:49 +0000 (0:00:01.327) 0:12:11.039 ***********
2025-07-04 18:17:37.449506 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3
2025-07-04 18:17:37.449513 | orchestrator |
2025-07-04 18:17:37.449516 | orchestrator | TASK [ceph-rgw : Create ec profile] ********************************************
2025-07-04 18:17:37.449520 | orchestrator | Friday 04 July 2025 18:16:50 +0000 (0:00:00.326) 0:12:11.365 ***********
2025-07-04 18:17:37.449524 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-07-04 18:17:37.449528 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-07-04 18:17:37.449532 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-07-04 18:17:37.449536 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-07-04 18:17:37.449539 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-07-04 18:17:37.449543 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:17:37.449547 | orchestrator |
2025-07-04 18:17:37.449554 | orchestrator | TASK [ceph-rgw : Set crush rule] ***********************************************
2025-07-04 18:17:37.449558 | orchestrator | Friday 04 July 2025 18:16:51 +0000 (0:00:01.193) 0:12:12.559 ***********
2025-07-04 18:17:37.449561 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-07-04 18:17:37.449565 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-07-04 18:17:37.449569 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-07-04 18:17:37.449573 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-07-04 18:17:37.449577 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-07-04 18:17:37.449580 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:17:37.449584 | orchestrator |
2025-07-04 18:17:37.449588 | orchestrator | TASK [ceph-rgw : Create rgw pools] *********************************************
2025-07-04 18:17:37.449591 | orchestrator | Friday 04 July 2025 18:16:52 +0000 (0:00:00.561) 0:12:13.120 ***********
2025-07-04 18:17:37.449595 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-07-04 18:17:37.449599 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-07-04 18:17:37.449603 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-07-04 18:17:37.449606 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-07-04 18:17:37.449610 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key':
'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-07-04 18:17:37.449614 | orchestrator |
2025-07-04 18:17:37.449617 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] *************************
2025-07-04 18:17:37.449621 | orchestrator | Friday 04 July 2025 18:17:23 +0000 (0:00:31.210) 0:12:44.331 ***********
2025-07-04 18:17:37.449628 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:17:37.449631 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:17:37.449635 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:17:37.449639 | orchestrator |
2025-07-04 18:17:37.449643 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ******************************
2025-07-04 18:17:37.449646 | orchestrator | Friday 04 July 2025 18:17:23 +0000 (0:00:00.319) 0:12:44.650 ***********
2025-07-04 18:17:37.449650 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:17:37.449654 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:17:37.449657 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:17:37.449661 | orchestrator |
2025-07-04 18:17:37.449665 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] *********************************
2025-07-04 18:17:37.449668 | orchestrator | Friday 04 July 2025 18:17:23 +0000 (0:00:00.308) 0:12:44.959 ***********
2025-07-04 18:17:37.449672 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-07-04 18:17:37.449676 | orchestrator |
2025-07-04 18:17:37.449679 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] *************************************
2025-07-04 18:17:37.449683 | orchestrator | Friday 04 July 2025 18:17:24 +0000 (0:00:00.825) 0:12:45.784 ***********
2025-07-04 18:17:37.449687 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-07-04 18:17:37.449690 | orchestrator |
2025-07-04 18:17:37.449694 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] ***********************************
2025-07-04 18:17:37.449698 | orchestrator | Friday 04 July 2025 18:17:25 +0000 (0:00:00.554) 0:12:46.339 ***********
2025-07-04 18:17:37.449704 | orchestrator | changed: [testbed-node-3]
2025-07-04 18:17:37.449707 | orchestrator | changed: [testbed-node-4]
2025-07-04 18:17:37.449711 | orchestrator | changed: [testbed-node-5]
2025-07-04 18:17:37.449715 | orchestrator |
2025-07-04 18:17:37.449718 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ********************
2025-07-04 18:17:37.449722 | orchestrator | Friday 04 July 2025 18:17:26 +0000 (0:00:01.346) 0:12:47.685 ***********
2025-07-04 18:17:37.449726 | orchestrator | changed: [testbed-node-3]
2025-07-04 18:17:37.449729 | orchestrator | changed: [testbed-node-4]
2025-07-04 18:17:37.449733 | orchestrator | changed: [testbed-node-5]
2025-07-04 18:17:37.449737 | orchestrator |
2025-07-04 18:17:37.449740 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] ***********************************
2025-07-04 18:17:37.449744 | orchestrator | Friday 04 July 2025 18:17:28 +0000 (0:00:01.500) 0:12:49.186 ***********
2025-07-04 18:17:37.449748 | orchestrator | changed: [testbed-node-3]
2025-07-04 18:17:37.449752 | orchestrator | changed: [testbed-node-4]
2025-07-04 18:17:37.449755 | orchestrator | changed: [testbed-node-5]
2025-07-04 18:17:37.449759 | orchestrator |
2025-07-04 18:17:37.449762 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] **********************************
2025-07-04 18:17:37.449766 | orchestrator | Friday 04 July 2025 18:17:29 +0000 (0:00:01.853) 0:12:51.040 ***********
2025-07-04 18:17:37.449773 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-07-04 18:17:37.449777 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-07-04 18:17:37.449781 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-07-04 18:17:37.449784 | orchestrator |
2025-07-04 18:17:37.449788 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-07-04 18:17:37.449792 | orchestrator | Friday 04 July 2025 18:17:32 +0000 (0:00:02.786) 0:12:53.826 ***********
2025-07-04 18:17:37.449795 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:17:37.449799 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:17:37.449803 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:17:37.449807 | orchestrator |
2025-07-04 18:17:37.449813 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2025-07-04 18:17:37.449817 | orchestrator | Friday 04 July 2025 18:17:33 +0000 (0:00:00.377) 0:12:54.204 ***********
2025-07-04 18:17:37.449821 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-07-04 18:17:37.449824 | orchestrator |
2025-07-04 18:17:37.449828 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2025-07-04 18:17:37.449832 | orchestrator | Friday 04 July 2025 18:17:33 +0000 (0:00:00.568) 0:12:54.772 ***********
2025-07-04 18:17:37.449836 | orchestrator | ok: [testbed-node-3]
2025-07-04 18:17:37.449839 | orchestrator | ok: [testbed-node-4]
2025-07-04 18:17:37.449843 | orchestrator | ok: [testbed-node-5]
2025-07-04 18:17:37.449847 | orchestrator |
2025-07-04 18:17:37.449853 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2025-07-04 18:17:37.449857 | orchestrator | Friday 04 July 2025 18:17:34 +0000 (0:00:00.618) 0:12:55.390 ***********
2025-07-04 18:17:37.449861 | orchestrator
| skipping: [testbed-node-3]
2025-07-04 18:17:37.449864 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:17:37.449868 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:17:37.449872 | orchestrator |
2025-07-04 18:17:37.449875 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2025-07-04 18:17:37.449879 | orchestrator | Friday 04 July 2025 18:17:34 +0000 (0:00:00.389) 0:12:55.780 ***********
2025-07-04 18:17:37.449883 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-07-04 18:17:37.449887 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-07-04 18:17:37.449890 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-07-04 18:17:37.449894 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:17:37.449898 | orchestrator |
2025-07-04 18:17:37.449901 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2025-07-04 18:17:37.449905 | orchestrator | Friday 04 July 2025 18:17:35 +0000 (0:00:00.627) 0:12:56.408 ***********
2025-07-04 18:17:37.449909 | orchestrator | ok: [testbed-node-3]
2025-07-04 18:17:37.449912 | orchestrator | ok: [testbed-node-4]
2025-07-04 18:17:37.449916 | orchestrator | ok: [testbed-node-5]
2025-07-04 18:17:37.449920 | orchestrator |
2025-07-04 18:17:37.449923 | orchestrator | PLAY RECAP *********************************************************************
2025-07-04 18:17:37.449927 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0
2025-07-04 18:17:37.449931 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0
2025-07-04 18:17:37.449935 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0
2025-07-04 18:17:37.449939 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0
2025-07-04 18:17:37.449942 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0
2025-07-04 18:17:37.449946 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0
2025-07-04 18:17:37.449950 | orchestrator |
2025-07-04 18:17:37.449953 | orchestrator |
2025-07-04 18:17:37.449957 | orchestrator |
2025-07-04 18:17:37.449963 | orchestrator | TASKS RECAP ********************************************************************
2025-07-04 18:17:37.449967 | orchestrator | Friday 04 July 2025 18:17:35 +0000 (0:00:00.253) 0:12:56.662 ***********
2025-07-04 18:17:37.449970 | orchestrator | ===============================================================================
2025-07-04 18:17:37.449974 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------ 156.90s
2025-07-04 18:17:37.449980 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 44.90s
2025-07-04 18:17:37.449984 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 31.21s
2025-07-04 18:17:37.449988 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 24.11s
2025-07-04 18:17:37.449991 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 21.85s
2025-07-04 18:17:37.449995 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 15.48s
2025-07-04 18:17:37.449999 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 13.07s
2025-07-04 18:17:37.450002 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.70s
2025-07-04 18:17:37.450006 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 9.89s
2025-07-04 18:17:37.450012 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.41s
2025-07-04 18:17:37.450036 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 7.06s
2025-07-04 18:17:37.450039 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.39s
2025-07-04 18:17:37.450043 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 4.91s
2025-07-04 18:17:37.450047 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 4.69s
2025-07-04 18:17:37.450051 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.32s
2025-07-04 18:17:37.450054 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.02s
2025-07-04 18:17:37.450058 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.89s
2025-07-04 18:17:37.450062 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.84s
2025-07-04 18:17:37.450065 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.56s
2025-07-04 18:17:37.450069 | orchestrator | ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created --- 3.02s
2025-07-04 18:17:37.450073 | orchestrator | 2025-07-04 18:17:37 | INFO  | Wait 1 second(s) until the
next check
2025-07-04 18:17:40.485295 | orchestrator | 2025-07-04 18:17:40 | INFO  | Task 83caf5cb-3d11-4e8b-b1f4-9779fece63c0 is in state STARTED
2025-07-04 18:17:40.487046 | orchestrator | 2025-07-04 18:17:40 | INFO  | Task 71c3bfb9-cb9c-463c-b873-a5612a53c28b is in state STARTED
2025-07-04 18:17:40.491540 | orchestrator | 2025-07-04 18:17:40 | INFO  | Task 4ce4b706-0736-4bb0-acca-49d965e838cc is in state SUCCESS
2025-07-04 18:17:40.491582 | orchestrator | 2025-07-04 18:17:40 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:17:40.494762 | orchestrator |
2025-07-04 18:17:40.494808 | orchestrator |
2025-07-04 18:17:40.494821 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-04 18:17:40.494832 | orchestrator |
2025-07-04 18:17:40.494843 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-04 18:17:40.494855 | orchestrator | Friday 04 July 2025 18:14:43 +0000 (0:00:00.257) 0:00:00.257 ***********
2025-07-04 18:17:40.494866 | orchestrator | ok: [testbed-node-0]
2025-07-04 18:17:40.494879 | orchestrator | ok: [testbed-node-1]
2025-07-04 18:17:40.494890 | orchestrator | ok: [testbed-node-2]
2025-07-04 18:17:40.494900 | orchestrator |
2025-07-04 18:17:40.494950 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-04 18:17:40.494962 | orchestrator | Friday 04 July 2025 18:14:44 +0000 (0:00:00.288) 0:00:00.546 ***********
2025-07-04 18:17:40.494973 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True)
2025-07-04 18:17:40.494985 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True)
2025-07-04 18:17:40.494996 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True)
2025-07-04 18:17:40.495006 | orchestrator |
2025-07-04 18:17:40.495017 | orchestrator | PLAY [Apply role opensearch] ***************************************************
2025-07-04 18:17:40.495028 | orchestrator |
2025-07-04 18:17:40.495064 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2025-07-04 18:17:40.495076 | orchestrator | Friday 04 July 2025 18:14:44 +0000 (0:00:00.429) 0:00:00.975 ***********
2025-07-04 18:17:40.495087 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-04 18:17:40.495098 | orchestrator |
2025-07-04 18:17:40.495109 | orchestrator | TASK [opensearch : Setting sysctl values] **************************************
2025-07-04 18:17:40.495120 | orchestrator | Friday 04 July 2025 18:14:45 +0000 (0:00:00.536) 0:00:01.511 ***********
2025-07-04 18:17:40.495130 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-07-04 18:17:40.495141 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-07-04 18:17:40.495151 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-07-04 18:17:40.495209 | orchestrator |
2025-07-04 18:17:40.495243 | orchestrator | TASK [opensearch : Ensuring config directories exist] **************************
2025-07-04 18:17:40.495262 | orchestrator | Friday 04 July 2025 18:14:45 +0000 (0:00:00.692) 0:00:02.204 ***********
2025-07-04 18:17:40.495287 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3',
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-07-04 18:17:40.495329 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-07-04 18:17:40.495371 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-07-04 18:17:40.495397 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-07-04 18:17:40.495438 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-07-04 18:17:40.495489 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-07-04 18:17:40.495512 | orchestrator |
2025-07-04 18:17:40.495532 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2025-07-04 18:17:40.495554 | orchestrator | Friday 04 July 2025 18:14:47 +0000 (0:00:01.652) 0:00:03.856 ***********
2025-07-04 18:17:40.495573 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-04 18:17:40.495590 | orchestrator |
2025-07-04 18:17:40.495603 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] *****
2025-07-04 18:17:40.495615 | orchestrator |
Friday 04 July 2025 18:14:47 +0000 (0:00:00.497) 0:00:04.353 ***********
2025-07-04 18:17:40.495640 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-07-04 18:17:40.495664 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-07-04 18:17:40.495677 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-07-04 18:17:40.495712 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-07-04 18:17:40.495736 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-07-04 18:17:40.495768 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user':
'opensearch', 'auth_pass': 'password'}}}})
2025-07-04 18:17:40.495788 | orchestrator |
2025-07-04 18:17:40.495806 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] ***
2025-07-04 18:17:40.495825 | orchestrator | Friday 04 July 2025 18:14:50 +0000 (0:00:02.587) 0:00:06.941 ***********
2025-07-04 18:17:40.495846 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-07-04 18:17:40.495873 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-07-04 18:17:40.495886 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:17:40.495906 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-07-04 18:17:40.495942 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout':
'30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-07-04 18:17:40.495954 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:17:40.495965 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-07-04 18:17:40.495982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-07-04 18:17:40.495994 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:17:40.496005 | orchestrator | 2025-07-04 18:17:40.496016 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2025-07-04 18:17:40.496027 | orchestrator | Friday 04 July 2025 18:14:52 +0000 (0:00:01.752) 0:00:08.694 *********** 2025-07-04 18:17:40.496045 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-07-04 18:17:40.496069 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-07-04 18:17:40.496081 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:17:40.496092 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-07-04 18:17:40.496109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-07-04 18:17:40.496121 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:17:40.496139 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-07-04 18:17:40.496185 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-07-04 18:17:40.496199 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:17:40.496209 | orchestrator |
2025-07-04 18:17:40.496220 | orchestrator | TASK [opensearch : Copying over config.json files for services] ****************
2025-07-04 18:17:40.496231 | orchestrator | Friday 04 July 2025 18:14:53 +0000 (0:00:00.891) 0:00:09.585 ***********
2025-07-04 18:17:40.496242 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl
http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-07-04 18:17:40.496259 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-07-04 18:17:40.496271 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-07-04 18:17:40.496304 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-07-04 18:17:40.496331 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-07-04 18:17:40.496349 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-07-04 18:17:40.496361 | orchestrator |
2025-07-04 18:17:40.496379 | orchestrator | TASK [opensearch : Copying over opensearch service config file] ****************
2025-07-04 18:17:40.496390 | orchestrator | Friday 04 July 2025 18:14:55 +0000 (0:00:02.327) 0:00:11.913 ***********
2025-07-04 18:17:40.496401 | orchestrator | changed: [testbed-node-2]
2025-07-04 18:17:40.496412 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:17:40.496423 | orchestrator | changed: [testbed-node-1]
2025-07-04 18:17:40.496434 | orchestrator |
2025-07-04 18:17:40.496444 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] *************
2025-07-04 18:17:40.496455 | orchestrator | Friday 04 July 2025 18:14:59 +0000 (0:00:04.200) 0:00:16.113 ***********
2025-07-04 18:17:40.496465 | orchestrator | changed: [testbed-node-1]
2025-07-04 18:17:40.496476 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:17:40.496486 | orchestrator | changed: [testbed-node-2]
2025-07-04 18:17:40.496497 | orchestrator |
2025-07-04 18:17:40.496507 | orchestrator | TASK [opensearch : Check opensearch containers] ********************************
2025-07-04 18:17:40.496518 | orchestrator | Friday 04 July 2025 18:15:01 +0000 (0:00:01.773) 0:00:17.887 ***********
2025-07-04 18:17:40.496539 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-07-04 18:17:40.496563 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'],
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-07-04 18:17:40.496584 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-07-04 18:17:40.496613 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-07-04 18:17:40.496657 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-07-04 18:17:40.496671 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-07-04 18:17:40.496683 | orchestrator |
2025-07-04 18:17:40.496694 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2025-07-04 18:17:40.496705 | orchestrator | Friday 04 July 2025 18:15:03 +0000 (0:00:02.184) 0:00:20.071 ***********
2025-07-04 18:17:40.496716 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:17:40.496727 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:17:40.496737 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:17:40.496748 | orchestrator |
2025-07-04 18:17:40.496759 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2025-07-04 18:17:40.496770 | orchestrator | Friday 04 July 2025 18:15:04 +0000 (0:00:00.560) 0:00:20.632 ***********
2025-07-04 18:17:40.496780 | orchestrator |
2025-07-04 18:17:40.496791 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2025-07-04 18:17:40.496802 | orchestrator | Friday 04 July 2025 18:15:04 +0000 (0:00:00.066) 0:00:20.698 ***********
2025-07-04 18:17:40.496812 | orchestrator |
2025-07-04 18:17:40.496823 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2025-07-04 18:17:40.496834 | orchestrator | Friday 04 July 2025 18:15:04 +0000 (0:00:00.072) 0:00:20.771 ***********
2025-07-04 18:17:40.496866 | orchestrator |
2025-07-04 18:17:40.496877 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************
2025-07-04 18:17:40.496887 | orchestrator | Friday 04 July 2025 18:15:04 +0000 (0:00:00.407) 0:00:21.178 ***********
2025-07-04 18:17:40.496898 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:17:40.496909 | orchestrator |
2025-07-04 18:17:40.496919 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] *********************************
2025-07-04 18:17:40.496930 | orchestrator | Friday 04 July 2025 18:15:05 +0000 (0:00:00.359) 0:00:21.537 ***********
2025-07-04 18:17:40.496940 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:17:40.496951 | orchestrator |
2025-07-04 18:17:40.496962 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ********************
2025-07-04 18:17:40.496973 | orchestrator | Friday 04 July 2025 18:15:05 +0000 (0:00:00.210) 0:00:21.748 ***********
2025-07-04 18:17:40.496983 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:17:40.496999 | orchestrator | changed: [testbed-node-2]
2025-07-04 18:17:40.497010 | orchestrator | changed: [testbed-node-1]
2025-07-04 18:17:40.497021 | orchestrator |
2025-07-04 18:17:40.497031 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] *********
2025-07-04 18:17:40.497042 | orchestrator | Friday 04 July 2025 18:16:14 +0000 (0:01:09.010) 0:01:30.759 ***********
2025-07-04 18:17:40.497053 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:17:40.497063 | orchestrator | changed: [testbed-node-1]
2025-07-04 18:17:40.497074 | orchestrator | changed: [testbed-node-2]
2025-07-04 18:17:40.497084 | orchestrator |
2025-07-04 18:17:40.497095 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2025-07-04 18:17:40.497106 | orchestrator | Friday 04 July 2025 18:17:27 +0000 (0:01:12.722) 0:02:43.481 ***********
2025-07-04 18:17:40.497117 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0,
testbed-node-1, testbed-node-2
2025-07-04 18:17:40.497127 | orchestrator |
2025-07-04 18:17:40.497138 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************
2025-07-04 18:17:40.497149 | orchestrator | Friday 04 July 2025 18:17:27 +0000 (0:00:00.782) 0:02:44.264 ***********
2025-07-04 18:17:40.497199 | orchestrator | ok: [testbed-node-0]
2025-07-04 18:17:40.497210 | orchestrator |
2025-07-04 18:17:40.497221 | orchestrator | TASK [opensearch : Check if a log retention policy exists] *********************
2025-07-04 18:17:40.497232 | orchestrator | Friday 04 July 2025 18:17:30 +0000 (0:00:02.348) 0:02:46.612 ***********
2025-07-04 18:17:40.497242 | orchestrator | ok: [testbed-node-0]
2025-07-04 18:17:40.497253 | orchestrator |
2025-07-04 18:17:40.497263 | orchestrator | TASK [opensearch : Create new log retention policy] ****************************
2025-07-04 18:17:40.497274 | orchestrator | Friday 04 July 2025 18:17:32 +0000 (0:00:02.185) 0:02:48.797 ***********
2025-07-04 18:17:40.497285 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:17:40.497295 | orchestrator |
2025-07-04 18:17:40.497306 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] *****************
2025-07-04 18:17:40.497317 | orchestrator | Friday 04 July 2025 18:17:35 +0000 (0:00:02.758) 0:02:51.556 ***********
2025-07-04 18:17:40.497328 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:17:40.497339 | orchestrator |
2025-07-04 18:17:40.497356 | orchestrator | PLAY RECAP *********************************************************************
2025-07-04 18:17:40.497382 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-07-04 18:17:40.497394 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-07-04 18:17:40.497405 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-07-04 18:17:40.497415 | orchestrator |
2025-07-04 18:17:40.497426 | orchestrator |
2025-07-04 18:17:40.497437 | orchestrator | TASKS RECAP ********************************************************************
2025-07-04 18:17:40.497455 | orchestrator | Friday 04 July 2025 18:17:37 +0000 (0:00:02.351) 0:02:53.907 ***********
2025-07-04 18:17:40.497465 | orchestrator | ===============================================================================
2025-07-04 18:17:40.497476 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 72.72s
2025-07-04 18:17:40.497486 | orchestrator | opensearch : Restart opensearch container ------------------------------ 69.01s
2025-07-04 18:17:40.497497 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 4.20s
2025-07-04 18:17:40.497508 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.76s
2025-07-04 18:17:40.497518 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.59s
2025-07-04 18:17:40.497529 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.35s
2025-07-04 18:17:40.497539 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.35s
2025-07-04 18:17:40.497550 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.33s
2025-07-04 18:17:40.497561 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.19s
2025-07-04 18:17:40.497571 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.18s
2025-07-04 18:17:40.497582 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.77s
2025-07-04 18:17:40.497592 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.75s
2025-07-04 18:17:40.497603 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.65s
2025-07-04 18:17:40.497613 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 0.89s
2025-07-04 18:17:40.497624 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.78s
2025-07-04 18:17:40.497635 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.69s
2025-07-04 18:17:40.497645 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.56s
2025-07-04 18:17:40.497667 | orchestrator | opensearch : Flush handlers --------------------------------------------- 0.55s
2025-07-04 18:17:40.497678 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.54s
2025-07-04 18:17:40.497689 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.50s
2025-07-04 18:17:43.542881 | orchestrator | 2025-07-04 18:17:43 | INFO  | Task 83caf5cb-3d11-4e8b-b1f4-9779fece63c0 is in state STARTED
2025-07-04 18:17:43.544456 | orchestrator | 2025-07-04 18:17:43 | INFO  | Task 71c3bfb9-cb9c-463c-b873-a5612a53c28b is in state STARTED
2025-07-04 18:17:43.544840 | orchestrator | 2025-07-04 18:17:43 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:17:46.590896 | orchestrator | 2025-07-04 18:17:46 | INFO  | Task 83caf5cb-3d11-4e8b-b1f4-9779fece63c0 is in state STARTED
2025-07-04 18:17:46.593611 | orchestrator | 2025-07-04 18:17:46 | INFO  | Task 71c3bfb9-cb9c-463c-b873-a5612a53c28b is in state STARTED
2025-07-04 18:17:46.596006 | orchestrator | 2025-07-04 18:17:46 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:17:49.648845 | orchestrator | 2025-07-04 18:17:49 | INFO  | Task 83caf5cb-3d11-4e8b-b1f4-9779fece63c0 is in state STARTED
2025-07-04 18:17:49.650534 | orchestrator | 2025-07-04 18:17:49 | INFO  | Task 71c3bfb9-cb9c-463c-b873-a5612a53c28b is in state STARTED
2025-07-04 18:17:49.650915 | orchestrator | 2025-07-04 18:17:49 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:17:52.707350 | orchestrator | 2025-07-04 18:17:52 | INFO  | Task 83caf5cb-3d11-4e8b-b1f4-9779fece63c0 is in state STARTED
2025-07-04 18:17:52.707459 | orchestrator | 2025-07-04 18:17:52 | INFO  | Task 71c3bfb9-cb9c-463c-b873-a5612a53c28b is in state STARTED
2025-07-04 18:17:52.707475 | orchestrator | 2025-07-04 18:17:52 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:17:55.758295 | orchestrator | 2025-07-04 18:17:55 | INFO  | Task 83caf5cb-3d11-4e8b-b1f4-9779fece63c0 is in state STARTED
2025-07-04 18:17:55.759270 | orchestrator | 2025-07-04 18:17:55 | INFO  | Task 71c3bfb9-cb9c-463c-b873-a5612a53c28b is in state STARTED
2025-07-04 18:17:55.759291 | orchestrator | 2025-07-04 18:17:55 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:17:58.805392 | orchestrator | 2025-07-04 18:17:58 | INFO  | Task 83caf5cb-3d11-4e8b-b1f4-9779fece63c0 is in state STARTED
2025-07-04 18:17:58.806189 | orchestrator | 2025-07-04 18:17:58 | INFO  | Task 71c3bfb9-cb9c-463c-b873-a5612a53c28b is in state STARTED
2025-07-04 18:17:58.806230 | orchestrator | 2025-07-04 18:17:58 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:18:01.856566 | orchestrator | 2025-07-04 18:18:01 | INFO  | Task 83caf5cb-3d11-4e8b-b1f4-9779fece63c0 is in state STARTED
2025-07-04 18:18:01.860593 | orchestrator |
2025-07-04 18:18:01.860674 | orchestrator |
2025-07-04 18:18:01.860863 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************
2025-07-04 18:18:01.860884 | orchestrator |
2025-07-04 18:18:01.860897 | orchestrator | TASK [Inform the user about the following task] ********************************
2025-07-04 18:18:01.860909 | orchestrator | Friday 04 July 2025 18:14:43 +0000 (0:00:00.109) 0:00:00.109 ***********
2025-07-04 18:18:01.860921 | orchestrator | ok: [localhost] => {
2025-07-04 18:18:01.860935
| orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2025-07-04 18:18:01.860947 | orchestrator | } 2025-07-04 18:18:01.860960 | orchestrator | 2025-07-04 18:18:01.860972 | orchestrator | TASK [Check MariaDB service] *************************************************** 2025-07-04 18:18:01.860984 | orchestrator | Friday 04 July 2025 18:14:43 +0000 (0:00:00.040) 0:00:00.149 *********** 2025-07-04 18:18:01.860996 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2025-07-04 18:18:01.861010 | orchestrator | ...ignoring 2025-07-04 18:18:01.861038 | orchestrator | 2025-07-04 18:18:01.861049 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2025-07-04 18:18:01.861060 | orchestrator | Friday 04 July 2025 18:14:46 +0000 (0:00:02.834) 0:00:02.984 *********** 2025-07-04 18:18:01.861071 | orchestrator | skipping: [localhost] 2025-07-04 18:18:01.861083 | orchestrator | 2025-07-04 18:18:01.861094 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2025-07-04 18:18:01.861105 | orchestrator | Friday 04 July 2025 18:14:46 +0000 (0:00:00.088) 0:00:03.072 *********** 2025-07-04 18:18:01.861116 | orchestrator | ok: [localhost] 2025-07-04 18:18:01.861128 | orchestrator | 2025-07-04 18:18:01.861161 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-04 18:18:01.861172 | orchestrator | 2025-07-04 18:18:01.861184 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-04 18:18:01.861195 | orchestrator | Friday 04 July 2025 18:14:46 +0000 (0:00:00.156) 0:00:03.229 *********** 2025-07-04 18:18:01.861206 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:18:01.861217 | orchestrator | ok: [testbed-node-1] 2025-07-04 
18:18:01.861228 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:18:01.861240 | orchestrator | 2025-07-04 18:18:01.861251 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-04 18:18:01.861262 | orchestrator | Friday 04 July 2025 18:14:47 +0000 (0:00:00.316) 0:00:03.545 *********** 2025-07-04 18:18:01.861273 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-07-04 18:18:01.861285 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-07-04 18:18:01.861296 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-07-04 18:18:01.861307 | orchestrator | 2025-07-04 18:18:01.861318 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-07-04 18:18:01.861329 | orchestrator | 2025-07-04 18:18:01.861340 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-07-04 18:18:01.861377 | orchestrator | Friday 04 July 2025 18:14:48 +0000 (0:00:00.812) 0:00:04.358 *********** 2025-07-04 18:18:01.861389 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-07-04 18:18:01.861422 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-07-04 18:18:01.861457 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-07-04 18:18:01.861488 | orchestrator | 2025-07-04 18:18:01.861508 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-07-04 18:18:01.861526 | orchestrator | Friday 04 July 2025 18:14:48 +0000 (0:00:00.447) 0:00:04.805 *********** 2025-07-04 18:18:01.861565 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-04 18:18:01.861584 | orchestrator | 2025-07-04 18:18:01.861602 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2025-07-04 18:18:01.861620 | orchestrator | 
Friday 04 July 2025 18:14:49 +0000 (0:00:00.566) 0:00:05.372 *********** 2025-07-04 18:18:01.861669 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-07-04 18:18:01.861706 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-07-04 18:18:01.861745 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-07-04 18:18:01.861767 | orchestrator | 2025-07-04 18:18:01.861800 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2025-07-04 18:18:01.861813 | orchestrator | Friday 04 July 2025 18:14:52 +0000 (0:00:03.730) 0:00:09.102 *********** 2025-07-04 18:18:01.861825 | 
orchestrator | changed: [testbed-node-0]
2025-07-04 18:18:01.861836 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:18:01.861847 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:18:01.861858 | orchestrator |
2025-07-04 18:18:01.861868 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] ***************************
2025-07-04 18:18:01.861879 | orchestrator | Friday 04 July 2025 18:14:53 +0000 (0:00:00.809) 0:00:09.912 ***********
2025-07-04 18:18:01.861890 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:18:01.861901 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:18:01.861912 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:18:01.861923 | orchestrator |
2025-07-04 18:18:01.861949 | orchestrator | TASK [mariadb : Copying over config.json files for services] *******************
2025-07-04 18:18:01.861961 | orchestrator | Friday 04 July 2025 18:14:55 +0000 (0:00:01.506) 0:00:11.418 ***********
2025-07-04 18:18:01.861979 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''],
'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-07-04 18:18:01.862008 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 
192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-07-04 18:18:01.862100 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 
inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-07-04 18:18:01.862225 | orchestrator |
2025-07-04 18:18:01.862253 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] ****************
2025-07-04 18:18:01.862275 | orchestrator | Friday 04 July 2025 18:15:00 +0000 (0:00:04.860) 0:00:16.279 ***********
2025-07-04 18:18:01.862286 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:18:01.862297 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:18:01.862308 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:18:01.862319 | orchestrator |
2025-07-04 18:18:01.862330 | orchestrator | TASK [mariadb : Copying over galera.cnf] ***************************************
2025-07-04 18:18:01.862341 | orchestrator | Friday 04 July 2025 18:15:01 +0000 (0:00:01.265) 0:00:17.544 ***********
2025-07-04 18:18:01.862352 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:18:01.862362 | orchestrator | changed: [testbed-node-2]
2025-07-04 18:18:01.862373 | orchestrator | changed: [testbed-node-1]
2025-07-04 18:18:01.862384 | orchestrator |
2025-07-04 18:18:01.862395 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2025-07-04 18:18:01.862405 | orchestrator | Friday 04 July 2025 18:15:06 +0000 (0:00:05.085) 0:00:22.630 ***********
2025-07-04 18:18:01.862416 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1,
testbed-node-2 2025-07-04 18:18:01.862427 | orchestrator | 2025-07-04 18:18:01.862438 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-07-04 18:18:01.862449 | orchestrator | Friday 04 July 2025 18:15:07 +0000 (0:00:00.955) 0:00:23.586 *********** 2025-07-04 18:18:01.862475 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 
testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-04 18:18:01.862496 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:18:01.862514 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 
5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-04 18:18:01.862536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-04 
18:18:01.862555 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:18:01.862567 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:18:01.862577 | orchestrator | 2025-07-04 18:18:01.862601 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-07-04 18:18:01.862612 | orchestrator | Friday 04 July 2025 18:15:10 +0000 (0:00:03.430) 0:00:27.016 *********** 2025-07-04 18:18:01.862630 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 
'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-04 18:18:01.862649 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:18:01.862726 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 
testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-04 18:18:01.862763 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:18:01.862784 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 
5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-04 18:18:01.862805 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:18:01.862823 | orchestrator | 2025-07-04 18:18:01.862850 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-07-04 18:18:01.862868 | orchestrator | Friday 04 July 2025 18:15:13 +0000 (0:00:02.939) 0:00:29.956 *********** 2025-07-04 18:18:01.862900 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-04 18:18:01.862940 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:18:01.862962 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 
3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-04 18:18:01.862984 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:18:01.863011 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout 
server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-07-04 18:18:01.863037 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:18:01.863050 | orchestrator | 2025-07-04 18:18:01.863060 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2025-07-04 18:18:01.863072 | orchestrator | Friday 04 July 2025 18:15:16 +0000 (0:00:02.676) 0:00:32.633 *********** 2025-07-04 18:18:01.863091 | orchestrator | 2025-07-04 18:18:01 | INFO  | Task 71c3bfb9-cb9c-463c-b873-a5612a53c28b is in state SUCCESS 2025-07-04 18:18:01.863104 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 
check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-07-04 18:18:01.863124 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 
5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-07-04 18:18:01.863348 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-07-04 18:18:01.863374 | orchestrator | 2025-07-04 18:18:01.863386 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2025-07-04 18:18:01.863398 | orchestrator | Friday 04 July 2025 18:15:19 +0000 (0:00:03.522) 0:00:36.155 *********** 2025-07-04 18:18:01.863421 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:18:01.863433 | orchestrator | changed: [testbed-node-1] 2025-07-04 18:18:01.863444 | orchestrator | changed: [testbed-node-2] 2025-07-04 18:18:01.863455 | orchestrator | 2025-07-04 18:18:01.863466 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2025-07-04 18:18:01.863476 | orchestrator | Friday 04 July 2025 18:15:20 +0000 (0:00:00.977) 0:00:37.133 *********** 2025-07-04 18:18:01.863487 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:18:01.863499 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:18:01.863510 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:18:01.863521 | orchestrator | 2025-07-04 18:18:01.863532 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2025-07-04 18:18:01.863543 | orchestrator | Friday 04 July 2025 18:15:21 +0000 (0:00:00.337) 0:00:37.471 *********** 2025-07-04 18:18:01.863554 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:18:01.863572 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:18:01.863583 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:18:01.863594 | orchestrator | 2025-07-04 18:18:01.863605 | orchestrator | TASK [mariadb 
: Check MariaDB service port liveness] *************************** 2025-07-04 18:18:01.863616 | orchestrator | Friday 04 July 2025 18:15:21 +0000 (0:00:00.412) 0:00:37.884 *********** 2025-07-04 18:18:01.863628 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2025-07-04 18:18:01.863640 | orchestrator | ...ignoring 2025-07-04 18:18:01.863651 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2025-07-04 18:18:01.863662 | orchestrator | ...ignoring 2025-07-04 18:18:01.863673 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2025-07-04 18:18:01.863693 | orchestrator | ...ignoring 2025-07-04 18:18:01.863703 | orchestrator | 2025-07-04 18:18:01.863714 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2025-07-04 18:18:01.863725 | orchestrator | Friday 04 July 2025 18:15:32 +0000 (0:00:11.041) 0:00:48.926 *********** 2025-07-04 18:18:01.863736 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:18:01.863746 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:18:01.863757 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:18:01.863768 | orchestrator | 2025-07-04 18:18:01.863779 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2025-07-04 18:18:01.863790 | orchestrator | Friday 04 July 2025 18:15:33 +0000 (0:00:00.651) 0:00:49.577 *********** 2025-07-04 18:18:01.863800 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:18:01.863822 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:18:01.863838 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:18:01.863854 | orchestrator | 2025-07-04 18:18:01.863877 | orchestrator | 
TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2025-07-04 18:18:01.863899 | orchestrator | Friday 04 July 2025 18:15:33 +0000 (0:00:00.454) 0:00:50.031 *********** 2025-07-04 18:18:01.863916 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:18:01.863933 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:18:01.863950 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:18:01.863967 | orchestrator | 2025-07-04 18:18:01.863982 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2025-07-04 18:18:01.863998 | orchestrator | Friday 04 July 2025 18:15:34 +0000 (0:00:00.415) 0:00:50.447 *********** 2025-07-04 18:18:01.864014 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:18:01.864031 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:18:01.864057 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:18:01.864073 | orchestrator | 2025-07-04 18:18:01.864090 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2025-07-04 18:18:01.864105 | orchestrator | Friday 04 July 2025 18:15:34 +0000 (0:00:00.420) 0:00:50.868 *********** 2025-07-04 18:18:01.864121 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:18:01.864166 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:18:01.864182 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:18:01.864197 | orchestrator | 2025-07-04 18:18:01.864212 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2025-07-04 18:18:01.864228 | orchestrator | Friday 04 July 2025 18:15:35 +0000 (0:00:00.844) 0:00:51.712 *********** 2025-07-04 18:18:01.864244 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:18:01.864259 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:18:01.864275 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:18:01.864290 | orchestrator | 2025-07-04 18:18:01.864306 | orchestrator | TASK [mariadb 
: include_tasks] ************************************************* 2025-07-04 18:18:01.864323 | orchestrator | Friday 04 July 2025 18:15:35 +0000 (0:00:00.419) 0:00:52.131 *********** 2025-07-04 18:18:01.864339 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:18:01.864355 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:18:01.864372 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2025-07-04 18:18:01.864388 | orchestrator | 2025-07-04 18:18:01.864404 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2025-07-04 18:18:01.864420 | orchestrator | Friday 04 July 2025 18:15:36 +0000 (0:00:00.392) 0:00:52.524 *********** 2025-07-04 18:18:01.864437 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:18:01.864454 | orchestrator | 2025-07-04 18:18:01.864470 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2025-07-04 18:18:01.864487 | orchestrator | Friday 04 July 2025 18:15:46 +0000 (0:00:10.181) 0:01:02.705 *********** 2025-07-04 18:18:01.864503 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:18:01.864518 | orchestrator | 2025-07-04 18:18:01.864530 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-07-04 18:18:01.864561 | orchestrator | Friday 04 July 2025 18:15:46 +0000 (0:00:00.138) 0:01:02.843 *********** 2025-07-04 18:18:01.864579 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:18:01.864595 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:18:01.864611 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:18:01.864628 | orchestrator | 2025-07-04 18:18:01.864644 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2025-07-04 18:18:01.864657 | orchestrator | Friday 04 July 2025 18:15:47 +0000 (0:00:01.215) 0:01:04.059 *********** 2025-07-04 18:18:01.864667 | orchestrator | changed: 
[testbed-node-0] 2025-07-04 18:18:01.864677 | orchestrator | 2025-07-04 18:18:01.864687 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2025-07-04 18:18:01.864696 | orchestrator | Friday 04 July 2025 18:15:56 +0000 (0:00:08.900) 0:01:12.959 *********** 2025-07-04 18:18:01.864706 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:18:01.864716 | orchestrator | 2025-07-04 18:18:01.864725 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2025-07-04 18:18:01.864735 | orchestrator | Friday 04 July 2025 18:15:58 +0000 (0:00:01.612) 0:01:14.572 *********** 2025-07-04 18:18:01.864744 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:18:01.864754 | orchestrator | 2025-07-04 18:18:01.864763 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2025-07-04 18:18:01.864780 | orchestrator | Friday 04 July 2025 18:16:00 +0000 (0:00:02.463) 0:01:17.036 *********** 2025-07-04 18:18:01.864790 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:18:01.864799 | orchestrator | 2025-07-04 18:18:01.864809 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2025-07-04 18:18:01.864818 | orchestrator | Friday 04 July 2025 18:16:00 +0000 (0:00:00.133) 0:01:17.170 *********** 2025-07-04 18:18:01.864828 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:18:01.864837 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:18:01.864846 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:18:01.864863 | orchestrator | 2025-07-04 18:18:01.864877 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2025-07-04 18:18:01.864894 | orchestrator | Friday 04 July 2025 18:16:01 +0000 (0:00:00.551) 0:01:17.721 *********** 2025-07-04 18:18:01.864909 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:18:01.864925 | orchestrator | [WARNING]: Could not 
match supplied host pattern, ignoring: mariadb_restart 2025-07-04 18:18:01.864942 | orchestrator | changed: [testbed-node-1] 2025-07-04 18:18:01.864958 | orchestrator | changed: [testbed-node-2] 2025-07-04 18:18:01.864975 | orchestrator | 2025-07-04 18:18:01.864985 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-07-04 18:18:01.864994 | orchestrator | skipping: no hosts matched 2025-07-04 18:18:01.865010 | orchestrator | 2025-07-04 18:18:01.865033 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-07-04 18:18:01.865054 | orchestrator | 2025-07-04 18:18:01.865070 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-07-04 18:18:01.865085 | orchestrator | Friday 04 July 2025 18:16:01 +0000 (0:00:00.327) 0:01:18.049 *********** 2025-07-04 18:18:01.865100 | orchestrator | changed: [testbed-node-1] 2025-07-04 18:18:01.865115 | orchestrator | 2025-07-04 18:18:01.865168 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-07-04 18:18:01.865187 | orchestrator | Friday 04 July 2025 18:16:21 +0000 (0:00:19.558) 0:01:37.607 *********** 2025-07-04 18:18:01.865203 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:18:01.865218 | orchestrator | 2025-07-04 18:18:01.865234 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-07-04 18:18:01.865247 | orchestrator | Friday 04 July 2025 18:16:42 +0000 (0:00:20.658) 0:01:58.265 *********** 2025-07-04 18:18:01.865263 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:18:01.865276 | orchestrator | 2025-07-04 18:18:01.865290 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-07-04 18:18:01.865304 | orchestrator | 2025-07-04 18:18:01.865319 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 
2025-07-04 18:18:01.865348 | orchestrator | Friday 04 July 2025 18:16:44 +0000 (0:00:02.517) 0:02:00.783 *********** 2025-07-04 18:18:01.865364 | orchestrator | changed: [testbed-node-2] 2025-07-04 18:18:01.865379 | orchestrator | 2025-07-04 18:18:01.865428 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-07-04 18:18:01.865459 | orchestrator | Friday 04 July 2025 18:17:09 +0000 (0:00:25.187) 0:02:25.970 *********** 2025-07-04 18:18:01.865475 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:18:01.865485 | orchestrator | 2025-07-04 18:18:01.865495 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-07-04 18:18:01.865504 | orchestrator | Friday 04 July 2025 18:17:26 +0000 (0:00:16.726) 0:02:42.697 *********** 2025-07-04 18:18:01.865513 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:18:01.865523 | orchestrator | 2025-07-04 18:18:01.865533 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-07-04 18:18:01.865543 | orchestrator | 2025-07-04 18:18:01.865552 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-07-04 18:18:01.865562 | orchestrator | Friday 04 July 2025 18:17:29 +0000 (0:00:02.838) 0:02:45.536 *********** 2025-07-04 18:18:01.865571 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:18:01.865581 | orchestrator | 2025-07-04 18:18:01.865590 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-07-04 18:18:01.865600 | orchestrator | Friday 04 July 2025 18:17:41 +0000 (0:00:11.968) 0:02:57.504 *********** 2025-07-04 18:18:01.865610 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:18:01.865619 | orchestrator | 2025-07-04 18:18:01.865629 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-07-04 18:18:01.865639 | orchestrator | Friday 04 July 2025 
18:17:45 +0000 (0:00:04.563) 0:03:02.068 *********** 2025-07-04 18:18:01.865648 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:18:01.865658 | orchestrator | 2025-07-04 18:18:01.865673 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-07-04 18:18:01.865686 | orchestrator | 2025-07-04 18:18:01.865697 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-07-04 18:18:01.865706 | orchestrator | Friday 04 July 2025 18:17:48 +0000 (0:00:02.522) 0:03:04.591 *********** 2025-07-04 18:18:01.865716 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-04 18:18:01.865726 | orchestrator | 2025-07-04 18:18:01.865735 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2025-07-04 18:18:01.865745 | orchestrator | Friday 04 July 2025 18:17:48 +0000 (0:00:00.513) 0:03:05.105 *********** 2025-07-04 18:18:01.865754 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:18:01.865764 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:18:01.865774 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:18:01.865784 | orchestrator | 2025-07-04 18:18:01.865793 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2025-07-04 18:18:01.865803 | orchestrator | Friday 04 July 2025 18:17:51 +0000 (0:00:02.388) 0:03:07.493 *********** 2025-07-04 18:18:01.865813 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:18:01.865823 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:18:01.865832 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:18:01.865841 | orchestrator | 2025-07-04 18:18:01.865851 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2025-07-04 18:18:01.865860 | orchestrator | Friday 04 July 2025 18:17:53 +0000 (0:00:02.221) 0:03:09.714 *********** 2025-07-04 18:18:01.865870 | 
orchestrator | skipping: [testbed-node-1] 2025-07-04 18:18:01.865879 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:18:01.865888 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:18:01.865898 | orchestrator | 2025-07-04 18:18:01.865915 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2025-07-04 18:18:01.865925 | orchestrator | Friday 04 July 2025 18:17:55 +0000 (0:00:02.034) 0:03:11.749 *********** 2025-07-04 18:18:01.865935 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:18:01.865952 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:18:01.865962 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:18:01.865972 | orchestrator | 2025-07-04 18:18:01.865981 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2025-07-04 18:18:01.865990 | orchestrator | Friday 04 July 2025 18:17:57 +0000 (0:00:02.147) 0:03:13.896 *********** 2025-07-04 18:18:01.866000 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:18:01.866010 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:18:01.866064 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:18:01.866075 | orchestrator | 2025-07-04 18:18:01.866085 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-07-04 18:18:01.866094 | orchestrator | Friday 04 July 2025 18:18:01 +0000 (0:00:03.344) 0:03:17.241 *********** 2025-07-04 18:18:01.866104 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:18:01.866114 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:18:01.866123 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:18:01.866157 | orchestrator | 2025-07-04 18:18:01.866176 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-04 18:18:01.866194 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-07-04 18:18:01.866212 | orchestrator 
| testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2025-07-04 18:18:01.866226 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-07-04 18:18:01.866236 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-07-04 18:18:01.866246 | orchestrator | 2025-07-04 18:18:01.866256 | orchestrator | 2025-07-04 18:18:01.866265 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-04 18:18:01.866274 | orchestrator | Friday 04 July 2025 18:18:01 +0000 (0:00:00.240) 0:03:17.482 *********** 2025-07-04 18:18:01.866284 | orchestrator | =============================================================================== 2025-07-04 18:18:01.866293 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 44.75s 2025-07-04 18:18:01.866312 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 37.39s 2025-07-04 18:18:01.866323 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 11.97s 2025-07-04 18:18:01.866332 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 11.04s 2025-07-04 18:18:01.866342 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.18s 2025-07-04 18:18:01.866351 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 8.90s 2025-07-04 18:18:01.866361 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.36s 2025-07-04 18:18:01.866370 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 5.09s 2025-07-04 18:18:01.866380 | orchestrator | mariadb : Copying over config.json files for services ------------------- 4.86s 2025-07-04 18:18:01.866393 | orchestrator | mariadb : Wait for 
MariaDB service port liveness ------------------------ 4.56s
2025-07-04 18:18:01.866407 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.73s
2025-07-04 18:18:01.866417 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.52s
2025-07-04 18:18:01.866427 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.43s
2025-07-04 18:18:01.866440 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.34s
2025-07-04 18:18:01.866456 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.94s
2025-07-04 18:18:01.866472 | orchestrator | Check MariaDB service --------------------------------------------------- 2.83s
2025-07-04 18:18:01.866499 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 2.68s
2025-07-04 18:18:01.866514 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.52s
2025-07-04 18:18:01.866531 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.46s
2025-07-04 18:18:01.866546 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.39s
2025-07-04 18:18:01.866564 | orchestrator | 2025-07-04 18:18:01 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:18:04.912316 | orchestrator | 2025-07-04 18:18:04 | INFO  | Task ca79526d-d4b9-4aa2-a036-a2c2b3d8701a is in state STARTED
2025-07-04 18:18:04.914215 | orchestrator | 2025-07-04 18:18:04 | INFO  | Task 83caf5cb-3d11-4e8b-b1f4-9779fece63c0 is in state STARTED
2025-07-04 18:18:04.915963 | orchestrator | 2025-07-04 18:18:04 | INFO  | Task 596705d4-5870-49b3-8d05-ce4baf4ccf43 is in state STARTED
2025-07-04 18:18:04.916197 | orchestrator | 2025-07-04 18:18:04 | INFO  | Wait 1 second(s) until the next check
[identical poll cycles for the same three tasks repeated every ~3 seconds from 18:18:07 through 18:19:48; all three tasks remained in state STARTED]
2025-07-04 18:19:51.656326 | orchestrator | 2025-07-04 18:19:51 | INFO  | Task ca79526d-d4b9-4aa2-a036-a2c2b3d8701a is in state STARTED
2025-07-04 18:19:51.658275 | orchestrator | 2025-07-04 18:19:51 | INFO  | Task 83caf5cb-3d11-4e8b-b1f4-9779fece63c0 is in state SUCCESS
2025-07-04 18:19:51.659932 | orchestrator |
2025-07-04 18:19:51.659971 | orchestrator |
2025-07-04 18:19:51.659983 | orchestrator | PLAY [Create ceph pools] *******************************************************
2025-07-04 18:19:51.659996 | orchestrator |
2025-07-04 18:19:51.660129 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2025-07-04 18:19:51.660145 | orchestrator | Friday 04 July 2025 18:17:40 +0000 (0:00:00.610) 0:00:00.611 ***********
2025-07-04 18:19:51.660157 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-07-04 18:19:51.660169 | orchestrator |
2025-07-04 18:19:51.660180 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2025-07-04 18:19:51.660191 | orchestrator | Friday 04 July 2025 18:17:41 +0000 (0:00:00.617) 0:00:01.228 ***********
2025-07-04 18:19:51.660202 | orchestrator | ok: [testbed-node-3]
2025-07-04 18:19:51.660239 | orchestrator | ok: [testbed-node-4]
2025-07-04 18:19:51.660259 | orchestrator | ok: [testbed-node-5]
2025-07-04 18:19:51.660279 | orchestrator |
2025-07-04 18:19:51.660299 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2025-07-04 18:19:51.660318 | orchestrator | Friday 04 July 2025 18:17:41 +0000 (0:00:00.617) 0:00:01.846 ***********
2025-07-04 18:19:51.660339 | orchestrator | ok: [testbed-node-3]
2025-07-04 18:19:51.660376 | orchestrator | ok: [testbed-node-4]
2025-07-04 18:19:51.660388 | orchestrator | ok: [testbed-node-5]
2025-07-04 18:19:51.660398 | orchestrator |
2025-07-04 18:19:51.660409 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2025-07-04 18:19:51.660423 | orchestrator | Friday 04 July 2025 18:17:42 +0000 (0:00:00.266) 0:00:02.112 ***********
2025-07-04 18:19:51.660441 | orchestrator | ok: [testbed-node-3]
2025-07-04 18:19:51.660459 | orchestrator | ok: [testbed-node-4]
2025-07-04 18:19:51.660471 | orchestrator | ok: [testbed-node-5]
2025-07-04 18:19:51.660481 | orchestrator |
2025-07-04 18:19:51.660492 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2025-07-04 18:19:51.660506 | orchestrator | Friday 04 July 2025 18:17:43 +0000 (0:00:00.786) 0:00:02.899 ***********
2025-07-04 18:19:51.660523 | orchestrator | ok: [testbed-node-3]
2025-07-04 18:19:51.660540 | orchestrator | ok: [testbed-node-4]
2025-07-04 18:19:51.660557 | orchestrator | ok: [testbed-node-5]
2025-07-04 18:19:51.660575 | orchestrator |
2025-07-04 18:19:51.660594 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2025-07-04 18:19:51.660614 | orchestrator | Friday 04 July 2025 18:17:43 +0000 (0:00:00.310) 0:00:03.209 ***********
2025-07-04 18:19:51.660631 | orchestrator | ok: [testbed-node-3]
2025-07-04 18:19:51.660649 | orchestrator | ok: [testbed-node-4]
2025-07-04 18:19:51.660668 | orchestrator | ok: [testbed-node-5]
2025-07-04 18:19:51.660687 | orchestrator |
2025-07-04 18:19:51.660704 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2025-07-04 18:19:51.660723 | orchestrator | Friday 04 July 2025 18:17:43 +0000 (0:00:00.289) 0:00:03.499 ***********
2025-07-04 18:19:51.660741 | orchestrator | ok: [testbed-node-3]
2025-07-04 18:19:51.660759 | orchestrator | ok: [testbed-node-4]
2025-07-04 18:19:51.660770 | orchestrator | ok: [testbed-node-5]
2025-07-04 18:19:51.660787 | orchestrator |
2025-07-04 18:19:51.660804 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2025-07-04 18:19:51.660823 | orchestrator | Friday 04 July 2025 18:17:43 +0000 (0:00:00.304) 0:00:03.803 ***********
2025-07-04 18:19:51.660842 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:19:51.660860 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:19:51.660880 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:19:51.660899 | orchestrator |
2025-07-04 18:19:51.660911 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2025-07-04 18:19:51.660962 | orchestrator | Friday 04 July 2025 18:17:44 +0000 (0:00:00.542) 0:00:04.345 ***********
2025-07-04 18:19:51.660982 | orchestrator | ok: [testbed-node-3]
2025-07-04 18:19:51.660994 | orchestrator | ok: [testbed-node-4]
2025-07-04 18:19:51.661004 | orchestrator | ok: [testbed-node-5]
2025-07-04 18:19:51.661015 | orchestrator |
2025-07-04 18:19:51.661026 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2025-07-04 18:19:51.661064 | orchestrator | Friday 04 July 2025 18:17:44 +0000 (0:00:00.303) 0:00:04.649 ***********
2025-07-04 18:19:51.661076 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-07-04 18:19:51.661087 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-07-04 18:19:51.661098 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-07-04 18:19:51.661110 | orchestrator |
2025-07-04 18:19:51.661128 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2025-07-04 18:19:51.661148 | orchestrator | Friday 04 July 2025 18:17:45 +0000 (0:00:00.614) 0:00:05.264 ***********
2025-07-04 18:19:51.661166 | orchestrator | ok: [testbed-node-3]
2025-07-04 18:19:51.661183 | orchestrator | ok: [testbed-node-4]
2025-07-04 18:19:51.661200 | orchestrator | ok: [testbed-node-5]
2025-07-04 18:19:51.661218 | orchestrator |
2025-07-04 18:19:51.661237 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2025-07-04 18:19:51.661255 | orchestrator | Friday 04 July 2025 18:17:45 +0000 (0:00:00.416) 0:00:05.681 ***********
2025-07-04 18:19:51.661273 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-07-04 18:19:51.661288 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-07-04 18:19:51.661299 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-07-04 18:19:51.661310 | orchestrator |
2025-07-04 18:19:51.661321 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2025-07-04 18:19:51.661332 | orchestrator | Friday 04 July 2025 18:17:47 +0000 (0:00:02.168) 0:00:07.849 ***********
2025-07-04 18:19:51.661342 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-07-04 18:19:51.661353 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-07-04 18:19:51.661364 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-07-04 18:19:51.661375 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:19:51.661386 | orchestrator |
2025-07-04 18:19:51.661396 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2025-07-04 18:19:51.661423 | orchestrator | Friday 04 July 2025 18:17:48 +0000 (0:00:00.392) 0:00:08.242 ***********
2025-07-04 18:19:51.661437 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-07-04 18:19:51.661452 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-07-04 18:19:51.661463 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-07-04 18:19:51.661474 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:19:51.661484 | orchestrator |
2025-07-04 18:19:51.661503 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2025-07-04 18:19:51.661514 | orchestrator | Friday 04 July 2025 18:17:49 +0000 (0:00:00.785) 0:00:09.028 ***********
2025-07-04 18:19:51.661527 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-07-04 18:19:51.661550 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-07-04 18:19:51.661562 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-07-04 18:19:51.661573 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:19:51.661584 | orchestrator |
2025-07-04 18:19:51.661595 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2025-07-04 18:19:51.661606 | orchestrator | Friday 04 July 2025 18:17:49 +0000 (0:00:00.150) 0:00:09.178 ***********
2025-07-04 18:19:51.661619 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'cc98b65f5b3b', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-07-04 18:17:46.478418', 'end': '2025-07-04 18:17:46.522291', 'delta': '0:00:00.043873', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['cc98b65f5b3b'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-07-04 18:19:51.661635 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'ac2816ff6098', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-07-04 18:17:47.220714', 'end': '2025-07-04 18:17:47.259014', 'delta': '0:00:00.038300', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['ac2816ff6098'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-07-04 18:19:51.661655 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'b5c155a92f18', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-07-04 18:17:47.788271', 'end': '2025-07-04 18:17:47.834765', 'delta': '0:00:00.046494', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['b5c155a92f18'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-07-04 18:19:51.661667 | orchestrator |
2025-07-04 18:19:51.661678 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2025-07-04 18:19:51.661689 | orchestrator | Friday 04 July 2025 18:17:49 +0000 (0:00:00.336) 0:00:09.515 ***********
2025-07-04 18:19:51.661706 | orchestrator | ok: [testbed-node-3]
2025-07-04 18:19:51.661722 | orchestrator | ok: [testbed-node-4]
2025-07-04 18:19:51.661733 | orchestrator | ok: [testbed-node-5]
2025-07-04 18:19:51.661743 | orchestrator |
2025-07-04 18:19:51.661754 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2025-07-04 18:19:51.661765 | orchestrator | Friday 04 July 2025 18:17:50 +0000 (0:00:00.408) 0:00:09.924 ***********
2025-07-04 18:19:51.661775 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2025-07-04 18:19:51.661786 | orchestrator |
2025-07-04 18:19:51.661796 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2025-07-04 18:19:51.661807 | orchestrator | Friday 04 July 2025 18:17:51 +0000 (0:00:01.756) 0:00:11.680 ***********
2025-07-04 18:19:51.661817 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:19:51.661828 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:19:51.661839 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:19:51.661849 | orchestrator |
2025-07-04 18:19:51.661860 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2025-07-04 18:19:51.661870 | orchestrator | Friday 04 July 2025 18:17:52 +0000 (0:00:00.289) 0:00:11.970 ***********
2025-07-04 18:19:51.661881 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:19:51.661891 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:19:51.661902 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:19:51.661913 | orchestrator |
2025-07-04 18:19:51.661923 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-07-04 18:19:51.661934 | orchestrator | Friday 04 July 2025 18:17:52 +0000 (0:00:00.388) 0:00:12.359 ***********
2025-07-04 18:19:51.661944 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:19:51.661955 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:19:51.661965 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:19:51.661978 | orchestrator |
2025-07-04 18:19:51.661998 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2025-07-04 18:19:51.662103 | orchestrator | Friday 04 July 2025 18:17:52 +0000 (0:00:00.466) 0:00:12.826 ***********
2025-07-04 18:19:51.662123 | orchestrator | ok: [testbed-node-3]
2025-07-04 18:19:51.662134 | orchestrator |
2025-07-04 18:19:51.662145 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2025-07-04 18:19:51.662156 | orchestrator | Friday 04 July 2025 18:17:53 +0000 (0:00:00.121) 0:00:12.947 ***********
2025-07-04 18:19:51.662166 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:19:51.662177 | orchestrator |
2025-07-04 18:19:51.662187 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-07-04 18:19:51.662198 | orchestrator | Friday 04 July 2025 18:17:53 +0000 (0:00:00.268) 0:00:13.216 ***********
2025-07-04 18:19:51.662208 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:19:51.662219 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:19:51.662229 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:19:51.662240 | orchestrator |
2025-07-04 18:19:51.662251 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2025-07-04 18:19:51.662262 | orchestrator | Friday 04 July 2025 18:17:53 +0000 (0:00:00.296) 0:00:13.513 ***********
2025-07-04 18:19:51.662272 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:19:51.662283 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:19:51.662293 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:19:51.662304 | orchestrator |
2025-07-04 18:19:51.662315 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2025-07-04 18:19:51.662325 | orchestrator | Friday 04 July 2025 18:17:53 +0000 (0:00:00.329) 0:00:13.842 ***********
2025-07-04 18:19:51.662336 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:19:51.662346 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:19:51.662357 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:19:51.662368 | orchestrator |
2025-07-04 18:19:51.662378 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2025-07-04 18:19:51.662398 | orchestrator | Friday 04 July 2025 18:17:54 +0000 (0:00:00.528) 0:00:14.370 ***********
2025-07-04 18:19:51.662409 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:19:51.662420 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:19:51.662430 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:19:51.662441 | orchestrator |
2025-07-04 18:19:51.662452 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2025-07-04 18:19:51.662463 | orchestrator | Friday 04 July 2025 18:17:54 +0000 (0:00:00.315) 0:00:14.686 ***********
2025-07-04 18:19:51.662473 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:19:51.662484 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:19:51.662494 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:19:51.662505 | orchestrator |
2025-07-04 18:19:51.662515 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2025-07-04 18:19:51.662526 | orchestrator | Friday 04 July 2025 18:17:55 +0000 (0:00:00.319) 0:00:15.005 ***********
2025-07-04 18:19:51.662537 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:19:51.662547 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:19:51.662558 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:19:51.662569 | orchestrator |
2025-07-04 18:19:51.662579 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2025-07-04 18:19:51.662598 | orchestrator | Friday 04 July 2025 18:17:55 +0000 (0:00:00.326) 0:00:15.332 ***********
2025-07-04 18:19:51.662609 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:19:51.662619 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:19:51.662630 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:19:51.662641 | orchestrator |
2025-07-04 18:19:51.662652 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2025-07-04 18:19:51.662662 | orchestrator | Friday 04 July 2025 18:17:55 +0000 (0:00:00.533) 0:00:15.866 ***********
2025-07-04 18:19:51.662680 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--32d6ac83--1783--5cc7--8f93--7bc92d6b2f36-osd--block--32d6ac83--1783--5cc7--8f93--7bc92d6b2f36', 'dm-uuid-LVM-B3Y3vVt13oq7W12qJO9i0i6uep7VTFvfcGfEXbvVV6V6O7RTne1vNFxTHUmPjFQE'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-07-04 18:19:51.662693 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--50c65579--7f86--5010--a824--2221e6b8d3f0-osd--block--50c65579--7f86--5010--a824--2221e6b8d3f0', 'dm-uuid-LVM-ZG3XWcaXjrAvYnnejqBIYJ0ciDWIs1Csg0ixG4tV2ItMKloRzAL7LZn9V6kamgcP'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-07-04 18:19:51.662705 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-07-04 18:19:51.662717 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-07-04 18:19:51.662735 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0,
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-04 18:19:51.662746 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-04 18:19:51.662757 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-04 18:19:51.662776 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-04 18:19:51.662788 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-04 18:19:51.662804 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-04 18:19:51.662819 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab42ac05-5a2a-4b10-b0be-14fcaa2726cd', 'scsi-SQEMU_QEMU_HARDDISK_ab42ac05-5a2a-4b10-b0be-14fcaa2726cd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab42ac05-5a2a-4b10-b0be-14fcaa2726cd-part1', 'scsi-SQEMU_QEMU_HARDDISK_ab42ac05-5a2a-4b10-b0be-14fcaa2726cd-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab42ac05-5a2a-4b10-b0be-14fcaa2726cd-part14', 'scsi-SQEMU_QEMU_HARDDISK_ab42ac05-5a2a-4b10-b0be-14fcaa2726cd-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab42ac05-5a2a-4b10-b0be-14fcaa2726cd-part15', 'scsi-SQEMU_QEMU_HARDDISK_ab42ac05-5a2a-4b10-b0be-14fcaa2726cd-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab42ac05-5a2a-4b10-b0be-14fcaa2726cd-part16', 
'scsi-SQEMU_QEMU_HARDDISK_ab42ac05-5a2a-4b10-b0be-14fcaa2726cd-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-04 18:19:51.662840 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--32d6ac83--1783--5cc7--8f93--7bc92d6b2f36-osd--block--32d6ac83--1783--5cc7--8f93--7bc92d6b2f36'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-YCSxTc-76yS-14eo-2p4B-V1C4-kRp3-c6rJuf', 'scsi-0QEMU_QEMU_HARDDISK_f1ee158f-8183-4691-b988-cdb0b3746d63', 'scsi-SQEMU_QEMU_HARDDISK_f1ee158f-8183-4691-b988-cdb0b3746d63'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-04 18:19:51.662862 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--50c65579--7f86--5010--a824--2221e6b8d3f0-osd--block--50c65579--7f86--5010--a824--2221e6b8d3f0'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-HYJGol-8JGG-VzqY-74tM-pxx0-LLRX-2M1TEb', 'scsi-0QEMU_QEMU_HARDDISK_cc10544f-afe1-4b17-ac35-d479dbd44023', 'scsi-SQEMU_QEMU_HARDDISK_cc10544f-afe1-4b17-ac35-d479dbd44023'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-04 18:19:51.662879 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--0c11b362--ac03--5009--be6f--11a9ef5f18dc-osd--block--0c11b362--ac03--5009--be6f--11a9ef5f18dc', 'dm-uuid-LVM-85sUfP606lq7Q3qlcfR1IFsiywW560yhDPu2j2sIi3qGNQFOf68DzrNES5iiIM0e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-07-04 18:19:51.662891 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c678ea0e-f232-4db4-9458-94e4077f665f', 'scsi-SQEMU_QEMU_HARDDISK_c678ea0e-f232-4db4-9458-94e4077f665f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-04 18:19:51.662903 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b396848d--3790--5c5a--8f8a--1e47b4270a43-osd--block--b396848d--3790--5c5a--8f8a--1e47b4270a43', 'dm-uuid-LVM-xMS0sPfwvCdF5iP1LiDfuNYivGcUuqa86TUlwLmuLCuTRsoOe4i32c1w7HKJVWmz'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-07-04 18:19:51.662921 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-04-17-25-01-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-04 18:19:51.662932 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': 
'0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-04 18:19:51.662943 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-04 18:19:51.662960 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-04 18:19:51.662971 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-04 18:19:51.662987 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2025-07-04 18:19:51.662998 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-04 18:19:51.663009 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-04 18:19:51.663020 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-04 18:19:51.663128 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_654ae738-db23-4503-810d-da49c3934f2e', 'scsi-SQEMU_QEMU_HARDDISK_654ae738-db23-4503-810d-da49c3934f2e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_654ae738-db23-4503-810d-da49c3934f2e-part1', 'scsi-SQEMU_QEMU_HARDDISK_654ae738-db23-4503-810d-da49c3934f2e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_654ae738-db23-4503-810d-da49c3934f2e-part14', 'scsi-SQEMU_QEMU_HARDDISK_654ae738-db23-4503-810d-da49c3934f2e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_654ae738-db23-4503-810d-da49c3934f2e-part15', 'scsi-SQEMU_QEMU_HARDDISK_654ae738-db23-4503-810d-da49c3934f2e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_654ae738-db23-4503-810d-da49c3934f2e-part16', 'scsi-SQEMU_QEMU_HARDDISK_654ae738-db23-4503-810d-da49c3934f2e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-04 18:19:51.663148 | 
orchestrator | skipping: [testbed-node-3] 2025-07-04 18:19:51.663208 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--0c11b362--ac03--5009--be6f--11a9ef5f18dc-osd--block--0c11b362--ac03--5009--be6f--11a9ef5f18dc'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-lAj6H2-GbpL-tiIF-gWFe-eMOf-QtJU-zCspVm', 'scsi-0QEMU_QEMU_HARDDISK_22af1316-5bc1-4af9-ac7a-65db3b57cabb', 'scsi-SQEMU_QEMU_HARDDISK_22af1316-5bc1-4af9-ac7a-65db3b57cabb'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-04 18:19:51.663221 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--b396848d--3790--5c5a--8f8a--1e47b4270a43-osd--block--b396848d--3790--5c5a--8f8a--1e47b4270a43'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-SSRNip-mMzR-tZor-FjCQ-hyeQ-c1ou-teOgYN', 'scsi-0QEMU_QEMU_HARDDISK_f2e9dc75-50de-4afc-bb89-e69d1400c858', 'scsi-SQEMU_QEMU_HARDDISK_f2e9dc75-50de-4afc-bb89-e69d1400c858'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-04 18:19:51.663281 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9dcda133-58d2-4853-8afe-c4a876875c80', 'scsi-SQEMU_QEMU_HARDDISK_9dcda133-58d2-4853-8afe-c4a876875c80'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-04 18:19:51.663295 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-04-17-25-06-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-04 18:19:51.663306 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:19:51.663403 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a98224fe--e18a--5ddc--b2f0--6ffdc4d7e2d6-osd--block--a98224fe--e18a--5ddc--b2f0--6ffdc4d7e2d6', 'dm-uuid-LVM-9yNJ8algFQCe0Lclf5Jy1KC3jWf3L15em4DweRrWAFPNdfMxHldV7he5T2KUXFML'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-07-04 18:19:51.663424 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--38a85088--e19d--56c7--801b--f45e1c084bd2-osd--block--38a85088--e19d--56c7--801b--f45e1c084bd2', 'dm-uuid-LVM-hIEJ3Y0TZU0RuTYN1UttFgCpmaDl5TJ7jivGPkU5G3lWJgmFJZiQcEWy2AJm3Cbl'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-07-04 18:19:51.663436 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-04 18:19:51.663479 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-04 18:19:51.663492 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-04 18:19:51.663503 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 
'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-04 18:19:51.663522 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-04 18:19:51.663533 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-04 18:19:51.663545 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-04 18:19:51.663556 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-04 18:19:51.663582 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5fbf5c6-81a8-4539-96cc-19329771a958', 'scsi-SQEMU_QEMU_HARDDISK_e5fbf5c6-81a8-4539-96cc-19329771a958'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5fbf5c6-81a8-4539-96cc-19329771a958-part1', 'scsi-SQEMU_QEMU_HARDDISK_e5fbf5c6-81a8-4539-96cc-19329771a958-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5fbf5c6-81a8-4539-96cc-19329771a958-part14', 'scsi-SQEMU_QEMU_HARDDISK_e5fbf5c6-81a8-4539-96cc-19329771a958-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5fbf5c6-81a8-4539-96cc-19329771a958-part15', 'scsi-SQEMU_QEMU_HARDDISK_e5fbf5c6-81a8-4539-96cc-19329771a958-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5fbf5c6-81a8-4539-96cc-19329771a958-part16', 'scsi-SQEMU_QEMU_HARDDISK_e5fbf5c6-81a8-4539-96cc-19329771a958-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 
'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-04 18:19:51.663604 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--a98224fe--e18a--5ddc--b2f0--6ffdc4d7e2d6-osd--block--a98224fe--e18a--5ddc--b2f0--6ffdc4d7e2d6'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-fdDzjd-9T9M-cqTu-GJl1-h4RP-aBJ6-IHudZ4', 'scsi-0QEMU_QEMU_HARDDISK_cc9ae976-88cb-4b21-9449-d8985ff12d4f', 'scsi-SQEMU_QEMU_HARDDISK_cc9ae976-88cb-4b21-9449-d8985ff12d4f'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-04 18:19:51.663616 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--38a85088--e19d--56c7--801b--f45e1c084bd2-osd--block--38a85088--e19d--56c7--801b--f45e1c084bd2'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-YiSio8-0igJ-vSex-UFpF-3jw7-YaBM-7i3yTR', 'scsi-0QEMU_QEMU_HARDDISK_d957e37b-6f48-487c-9682-d56dbc604f5a', 'scsi-SQEMU_QEMU_HARDDISK_d957e37b-6f48-487c-9682-d56dbc604f5a'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-04 18:19:51.663628 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_36831ba3-00a3-40d1-8c8d-d5688ce5b92e', 'scsi-SQEMU_QEMU_HARDDISK_36831ba3-00a3-40d1-8c8d-d5688ce5b92e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-04 18:19:51.663645 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-04-17-24-57-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-04 18:19:51.663655 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:19:51.663665 | orchestrator | 2025-07-04 18:19:51.663675 | orchestrator | TASK [ceph-facts : Set_fact devices 
generate device list when osd_auto_discovery] *** 2025-07-04 18:19:51.663685 | orchestrator | Friday 04 July 2025 18:17:56 +0000 (0:00:00.611) 0:00:16.478 *********** 2025-07-04 18:19:51.663700 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--32d6ac83--1783--5cc7--8f93--7bc92d6b2f36-osd--block--32d6ac83--1783--5cc7--8f93--7bc92d6b2f36', 'dm-uuid-LVM-B3Y3vVt13oq7W12qJO9i0i6uep7VTFvfcGfEXbvVV6V6O7RTne1vNFxTHUmPjFQE'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:19:51.663718 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--50c65579--7f86--5010--a824--2221e6b8d3f0-osd--block--50c65579--7f86--5010--a824--2221e6b8d3f0', 'dm-uuid-LVM-ZG3XWcaXjrAvYnnejqBIYJ0ciDWIs1Csg0ixG4tV2ItMKloRzAL7LZn9V6kamgcP'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:19:51.663729 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:19:51.663739 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:19:51.663749 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:19:51.663765 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': 
{'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:19:51.663780 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:19:51.663796 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:19:51.663807 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--0c11b362--ac03--5009--be6f--11a9ef5f18dc-osd--block--0c11b362--ac03--5009--be6f--11a9ef5f18dc', 'dm-uuid-LVM-85sUfP606lq7Q3qlcfR1IFsiywW560yhDPu2j2sIi3qGNQFOf68DzrNES5iiIM0e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:19:51.663817 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b396848d--3790--5c5a--8f8a--1e47b4270a43-osd--block--b396848d--3790--5c5a--8f8a--1e47b4270a43', 'dm-uuid-LVM-xMS0sPfwvCdF5iP1LiDfuNYivGcUuqa86TUlwLmuLCuTRsoOe4i32c1w7HKJVWmz'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:19:51.663827 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'}) 
 2025-07-04 18:19:51.663845 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:19:51.663860 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:19:51.663876 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:19:51.663886 | orchestrator | skipping: [testbed-node-4] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:19:51.663903 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab42ac05-5a2a-4b10-b0be-14fcaa2726cd', 'scsi-SQEMU_QEMU_HARDDISK_ab42ac05-5a2a-4b10-b0be-14fcaa2726cd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab42ac05-5a2a-4b10-b0be-14fcaa2726cd-part1', 'scsi-SQEMU_QEMU_HARDDISK_ab42ac05-5a2a-4b10-b0be-14fcaa2726cd-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab42ac05-5a2a-4b10-b0be-14fcaa2726cd-part14', 'scsi-SQEMU_QEMU_HARDDISK_ab42ac05-5a2a-4b10-b0be-14fcaa2726cd-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab42ac05-5a2a-4b10-b0be-14fcaa2726cd-part15', 
'scsi-SQEMU_QEMU_HARDDISK_ab42ac05-5a2a-4b10-b0be-14fcaa2726cd-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab42ac05-5a2a-4b10-b0be-14fcaa2726cd-part16', 'scsi-SQEMU_QEMU_HARDDISK_ab42ac05-5a2a-4b10-b0be-14fcaa2726cd-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:19:51.663919 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:19:51.663936 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--32d6ac83--1783--5cc7--8f93--7bc92d6b2f36-osd--block--32d6ac83--1783--5cc7--8f93--7bc92d6b2f36'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-YCSxTc-76yS-14eo-2p4B-V1C4-kRp3-c6rJuf', 'scsi-0QEMU_QEMU_HARDDISK_f1ee158f-8183-4691-b988-cdb0b3746d63', 'scsi-SQEMU_QEMU_HARDDISK_f1ee158f-8183-4691-b988-cdb0b3746d63'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:19:51.663948 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--50c65579--7f86--5010--a824--2221e6b8d3f0-osd--block--50c65579--7f86--5010--a824--2221e6b8d3f0'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-HYJGol-8JGG-VzqY-74tM-pxx0-LLRX-2M1TEb', 'scsi-0QEMU_QEMU_HARDDISK_cc10544f-afe1-4b17-ac35-d479dbd44023', 'scsi-SQEMU_QEMU_HARDDISK_cc10544f-afe1-4b17-ac35-d479dbd44023'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:19:51.663958 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:19:51.663974 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c678ea0e-f232-4db4-9458-94e4077f665f', 'scsi-SQEMU_QEMU_HARDDISK_c678ea0e-f232-4db4-9458-94e4077f665f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:19:51.663984 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:19:51.664007 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: 
Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-04-17-25-01-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:19:51.664018 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:19:51.664048 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:19:51.664062 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:19:51.664086 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was 
False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_654ae738-db23-4503-810d-da49c3934f2e', 'scsi-SQEMU_QEMU_HARDDISK_654ae738-db23-4503-810d-da49c3934f2e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_654ae738-db23-4503-810d-da49c3934f2e-part1', 'scsi-SQEMU_QEMU_HARDDISK_654ae738-db23-4503-810d-da49c3934f2e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_654ae738-db23-4503-810d-da49c3934f2e-part14', 'scsi-SQEMU_QEMU_HARDDISK_654ae738-db23-4503-810d-da49c3934f2e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_654ae738-db23-4503-810d-da49c3934f2e-part15', 'scsi-SQEMU_QEMU_HARDDISK_654ae738-db23-4503-810d-da49c3934f2e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_654ae738-db23-4503-810d-da49c3934f2e-part16', 'scsi-SQEMU_QEMU_HARDDISK_654ae738-db23-4503-810d-da49c3934f2e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:19:51.664105 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--0c11b362--ac03--5009--be6f--11a9ef5f18dc-osd--block--0c11b362--ac03--5009--be6f--11a9ef5f18dc'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-lAj6H2-GbpL-tiIF-gWFe-eMOf-QtJU-zCspVm', 'scsi-0QEMU_QEMU_HARDDISK_22af1316-5bc1-4af9-ac7a-65db3b57cabb', 'scsi-SQEMU_QEMU_HARDDISK_22af1316-5bc1-4af9-ac7a-65db3b57cabb'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:19:51.664115 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--b396848d--3790--5c5a--8f8a--1e47b4270a43-osd--block--b396848d--3790--5c5a--8f8a--1e47b4270a43'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-SSRNip-mMzR-tZor-FjCQ-hyeQ-c1ou-teOgYN', 'scsi-0QEMU_QEMU_HARDDISK_f2e9dc75-50de-4afc-bb89-e69d1400c858', 'scsi-SQEMU_QEMU_HARDDISK_f2e9dc75-50de-4afc-bb89-e69d1400c858'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:19:51.664325 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9dcda133-58d2-4853-8afe-c4a876875c80', 'scsi-SQEMU_QEMU_HARDDISK_9dcda133-58d2-4853-8afe-c4a876875c80'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:19:51.664339 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-04-17-25-06-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:19:51.664356 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:19:51.664371 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a98224fe--e18a--5ddc--b2f0--6ffdc4d7e2d6-osd--block--a98224fe--e18a--5ddc--b2f0--6ffdc4d7e2d6', 'dm-uuid-LVM-9yNJ8algFQCe0Lclf5Jy1KC3jWf3L15em4DweRrWAFPNdfMxHldV7he5T2KUXFML'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:19:51.664382 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--38a85088--e19d--56c7--801b--f45e1c084bd2-osd--block--38a85088--e19d--56c7--801b--f45e1c084bd2', 'dm-uuid-LVM-hIEJ3Y0TZU0RuTYN1UttFgCpmaDl5TJ7jivGPkU5G3lWJgmFJZiQcEWy2AJm3Cbl'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:19:51.664392 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:19:51.664402 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:19:51.664419 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:19:51.664430 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:19:51.664450 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:19:51.664461 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:19:51.664471 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | 
default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:19:51.664481 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:19:51.664503 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5fbf5c6-81a8-4539-96cc-19329771a958', 'scsi-SQEMU_QEMU_HARDDISK_e5fbf5c6-81a8-4539-96cc-19329771a958'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5fbf5c6-81a8-4539-96cc-19329771a958-part1', 'scsi-SQEMU_QEMU_HARDDISK_e5fbf5c6-81a8-4539-96cc-19329771a958-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5fbf5c6-81a8-4539-96cc-19329771a958-part14', 'scsi-SQEMU_QEMU_HARDDISK_e5fbf5c6-81a8-4539-96cc-19329771a958-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5fbf5c6-81a8-4539-96cc-19329771a958-part15', 'scsi-SQEMU_QEMU_HARDDISK_e5fbf5c6-81a8-4539-96cc-19329771a958-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5fbf5c6-81a8-4539-96cc-19329771a958-part16', 'scsi-SQEMU_QEMU_HARDDISK_e5fbf5c6-81a8-4539-96cc-19329771a958-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-07-04 18:19:51.664520 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--a98224fe--e18a--5ddc--b2f0--6ffdc4d7e2d6-osd--block--a98224fe--e18a--5ddc--b2f0--6ffdc4d7e2d6'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-fdDzjd-9T9M-cqTu-GJl1-h4RP-aBJ6-IHudZ4', 'scsi-0QEMU_QEMU_HARDDISK_cc9ae976-88cb-4b21-9449-d8985ff12d4f', 'scsi-SQEMU_QEMU_HARDDISK_cc9ae976-88cb-4b21-9449-d8985ff12d4f'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:19:51.664531 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--38a85088--e19d--56c7--801b--f45e1c084bd2-osd--block--38a85088--e19d--56c7--801b--f45e1c084bd2'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-YiSio8-0igJ-vSex-UFpF-3jw7-YaBM-7i3yTR', 'scsi-0QEMU_QEMU_HARDDISK_d957e37b-6f48-487c-9682-d56dbc604f5a', 'scsi-SQEMU_QEMU_HARDDISK_d957e37b-6f48-487c-9682-d56dbc604f5a'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:19:51.664546 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_36831ba3-00a3-40d1-8c8d-d5688ce5b92e', 'scsi-SQEMU_QEMU_HARDDISK_36831ba3-00a3-40d1-8c8d-d5688ce5b92e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:19:51.664556 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-04-17-24-57-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-04 18:19:51.664572 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:19:51.664582 | orchestrator | 2025-07-04 18:19:51.664591 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2025-07-04 18:19:51.664601 | orchestrator | Friday 04 July 2025 18:17:57 +0000 (0:00:00.617) 0:00:17.095 *********** 2025-07-04 18:19:51.664611 | orchestrator | ok: [testbed-node-3] 2025-07-04 18:19:51.664621 | orchestrator | ok: [testbed-node-4] 2025-07-04 18:19:51.664630 | orchestrator | ok: [testbed-node-5] 2025-07-04 18:19:51.664640 | orchestrator | 2025-07-04 18:19:51.664649 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2025-07-04 18:19:51.664659 | orchestrator | Friday 04 July 2025 18:17:57 +0000 (0:00:00.736) 0:00:17.832 *********** 2025-07-04 18:19:51.664668 | orchestrator | ok: [testbed-node-3] 2025-07-04 18:19:51.664678 | orchestrator | ok: [testbed-node-4] 2025-07-04 18:19:51.664692 | orchestrator | ok: [testbed-node-5] 2025-07-04 18:19:51.664701 | orchestrator | 2025-07-04 18:19:51.664711 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-07-04 18:19:51.664721 | orchestrator | Friday 04 July 2025 18:17:58 +0000 (0:00:00.521) 0:00:18.354 *********** 2025-07-04 18:19:51.664731 | orchestrator | ok: [testbed-node-3] 2025-07-04 18:19:51.664740 | orchestrator | ok: [testbed-node-4] 2025-07-04 18:19:51.664750 | orchestrator | ok: [testbed-node-5] 2025-07-04 18:19:51.664759 | orchestrator | 2025-07-04 18:19:51.664769 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-07-04 18:19:51.664778 | orchestrator | Friday 04 July 2025 18:17:59 +0000 (0:00:00.701) 0:00:19.055 
*********** 2025-07-04 18:19:51.664788 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:19:51.664797 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:19:51.664807 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:19:51.664817 | orchestrator | 2025-07-04 18:19:51.664826 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-07-04 18:19:51.664836 | orchestrator | Friday 04 July 2025 18:17:59 +0000 (0:00:00.290) 0:00:19.346 *********** 2025-07-04 18:19:51.664845 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:19:51.664855 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:19:51.664864 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:19:51.664874 | orchestrator | 2025-07-04 18:19:51.664883 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-07-04 18:19:51.664893 | orchestrator | Friday 04 July 2025 18:17:59 +0000 (0:00:00.452) 0:00:19.799 *********** 2025-07-04 18:19:51.664903 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:19:51.664912 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:19:51.664922 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:19:51.664931 | orchestrator | 2025-07-04 18:19:51.664941 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2025-07-04 18:19:51.664950 | orchestrator | Friday 04 July 2025 18:18:00 +0000 (0:00:00.555) 0:00:20.354 *********** 2025-07-04 18:19:51.664959 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-07-04 18:19:51.664969 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-07-04 18:19:51.664979 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-07-04 18:19:51.664989 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-07-04 18:19:51.664998 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-07-04 18:19:51.665008 | 
orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-07-04 18:19:51.665024 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-07-04 18:19:51.665055 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-07-04 18:19:51.665065 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-07-04 18:19:51.665075 | orchestrator | 2025-07-04 18:19:51.665084 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2025-07-04 18:19:51.665094 | orchestrator | Friday 04 July 2025 18:18:01 +0000 (0:00:00.855) 0:00:21.210 *********** 2025-07-04 18:19:51.665103 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-07-04 18:19:51.665113 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-07-04 18:19:51.665123 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-07-04 18:19:51.665132 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-07-04 18:19:51.665142 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-07-04 18:19:51.665151 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-07-04 18:19:51.665161 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:19:51.665170 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:19:51.665180 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-07-04 18:19:51.665189 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-07-04 18:19:51.665199 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-07-04 18:19:51.665213 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:19:51.665223 | orchestrator | 2025-07-04 18:19:51.665312 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2025-07-04 18:19:51.665323 | orchestrator | Friday 04 July 2025 18:18:01 +0000 (0:00:00.358) 0:00:21.568 *********** 2025-07-04 
18:19:51.665332 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-04 18:19:51.665342 | orchestrator | 2025-07-04 18:19:51.665352 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-07-04 18:19:51.665363 | orchestrator | Friday 04 July 2025 18:18:02 +0000 (0:00:00.766) 0:00:22.334 *********** 2025-07-04 18:19:51.665373 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:19:51.665382 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:19:51.665392 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:19:51.665401 | orchestrator | 2025-07-04 18:19:51.665411 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-07-04 18:19:51.665420 | orchestrator | Friday 04 July 2025 18:18:02 +0000 (0:00:00.313) 0:00:22.647 *********** 2025-07-04 18:19:51.665430 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:19:51.665439 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:19:51.665449 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:19:51.665458 | orchestrator | 2025-07-04 18:19:51.665468 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-07-04 18:19:51.665477 | orchestrator | Friday 04 July 2025 18:18:03 +0000 (0:00:00.317) 0:00:22.965 *********** 2025-07-04 18:19:51.665486 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:19:51.665496 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:19:51.665505 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:19:51.665515 | orchestrator | 2025-07-04 18:19:51.665524 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-07-04 18:19:51.665534 | orchestrator | Friday 04 July 2025 18:18:03 +0000 (0:00:00.321) 0:00:23.286 *********** 2025-07-04 
18:19:51.665543 | orchestrator | ok: [testbed-node-3] 2025-07-04 18:19:51.665553 | orchestrator | ok: [testbed-node-4] 2025-07-04 18:19:51.665562 | orchestrator | ok: [testbed-node-5] 2025-07-04 18:19:51.665572 | orchestrator | 2025-07-04 18:19:51.665587 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-07-04 18:19:51.665597 | orchestrator | Friday 04 July 2025 18:18:03 +0000 (0:00:00.586) 0:00:23.872 *********** 2025-07-04 18:19:51.665606 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-07-04 18:19:51.665624 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-07-04 18:19:51.665633 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-07-04 18:19:51.665643 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:19:51.665653 | orchestrator | 2025-07-04 18:19:51.665662 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-07-04 18:19:51.665672 | orchestrator | Friday 04 July 2025 18:18:04 +0000 (0:00:00.374) 0:00:24.247 *********** 2025-07-04 18:19:51.665681 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-07-04 18:19:51.665690 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-07-04 18:19:51.665700 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-07-04 18:19:51.665709 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:19:51.665719 | orchestrator | 2025-07-04 18:19:51.665728 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-07-04 18:19:51.665738 | orchestrator | Friday 04 July 2025 18:18:04 +0000 (0:00:00.362) 0:00:24.610 *********** 2025-07-04 18:19:51.665747 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-07-04 18:19:51.665757 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-07-04 18:19:51.665766 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-07-04 18:19:51.665776 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:19:51.665785 | orchestrator | 2025-07-04 18:19:51.665795 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-07-04 18:19:51.665804 | orchestrator | Friday 04 July 2025 18:18:05 +0000 (0:00:00.367) 0:00:24.978 *********** 2025-07-04 18:19:51.665813 | orchestrator | ok: [testbed-node-3] 2025-07-04 18:19:51.665823 | orchestrator | ok: [testbed-node-4] 2025-07-04 18:19:51.665833 | orchestrator | ok: [testbed-node-5] 2025-07-04 18:19:51.665842 | orchestrator | 2025-07-04 18:19:51.665851 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-07-04 18:19:51.665861 | orchestrator | Friday 04 July 2025 18:18:05 +0000 (0:00:00.314) 0:00:25.293 *********** 2025-07-04 18:19:51.665870 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-07-04 18:19:51.665880 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-07-04 18:19:51.665889 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-07-04 18:19:51.665899 | orchestrator | 2025-07-04 18:19:51.665908 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2025-07-04 18:19:51.665918 | orchestrator | Friday 04 July 2025 18:18:06 +0000 (0:00:00.587) 0:00:25.880 *********** 2025-07-04 18:19:51.665927 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-07-04 18:19:51.665937 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-07-04 18:19:51.665947 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-07-04 18:19:51.665956 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-07-04 18:19:51.665966 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2025-07-04 18:19:51.665976 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-07-04 18:19:51.665985 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-07-04 18:19:51.665994 | orchestrator | 2025-07-04 18:19:51.666004 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2025-07-04 18:19:51.666217 | orchestrator | Friday 04 July 2025 18:18:06 +0000 (0:00:00.973) 0:00:26.854 *********** 2025-07-04 18:19:51.666237 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-07-04 18:19:51.666247 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-07-04 18:19:51.666257 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-07-04 18:19:51.666266 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-07-04 18:19:51.666284 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-07-04 18:19:51.666294 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-07-04 18:19:51.666304 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-07-04 18:19:51.666313 | orchestrator | 2025-07-04 18:19:51.666323 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2025-07-04 18:19:51.666332 | orchestrator | Friday 04 July 2025 18:18:08 +0000 (0:00:02.002) 0:00:28.856 *********** 2025-07-04 18:19:51.666341 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:19:51.666351 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:19:51.666360 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2025-07-04 18:19:51.666370 | orchestrator | 2025-07-04 18:19:51.666380 | 
orchestrator | TASK [create openstack pool(s)] ************************************************ 2025-07-04 18:19:51.666389 | orchestrator | Friday 04 July 2025 18:18:09 +0000 (0:00:00.435) 0:00:29.292 *********** 2025-07-04 18:19:51.666399 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-07-04 18:19:51.666420 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-07-04 18:19:51.666430 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-07-04 18:19:51.666440 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-07-04 18:19:51.666451 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-07-04 18:19:51.666460 | orchestrator | 2025-07-04 18:19:51.666469 | orchestrator | TASK [generate keys] 
*********************************************************** 2025-07-04 18:19:51.666479 | orchestrator | Friday 04 July 2025 18:18:54 +0000 (0:00:45.533) 0:01:14.826 *********** 2025-07-04 18:19:51.666488 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-04 18:19:51.666498 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-04 18:19:51.666507 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-04 18:19:51.666516 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-04 18:19:51.666526 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-04 18:19:51.666535 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-04 18:19:51.666544 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2025-07-04 18:19:51.666553 | orchestrator | 2025-07-04 18:19:51.666563 | orchestrator | TASK [get keys from monitors] ************************************************** 2025-07-04 18:19:51.666572 | orchestrator | Friday 04 July 2025 18:19:20 +0000 (0:00:25.495) 0:01:40.321 *********** 2025-07-04 18:19:51.666581 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-04 18:19:51.666597 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-04 18:19:51.666606 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-04 18:19:51.666616 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-04 18:19:51.666625 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-04 18:19:51.666635 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-04 18:19:51.666644 | orchestrator | 
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-07-04 18:19:51.666654 | orchestrator | 2025-07-04 18:19:51.666669 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2025-07-04 18:19:51.666679 | orchestrator | Friday 04 July 2025 18:19:32 +0000 (0:00:12.052) 0:01:52.374 *********** 2025-07-04 18:19:51.666688 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-04 18:19:51.666698 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-07-04 18:19:51.666707 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-07-04 18:19:51.666717 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-04 18:19:51.666726 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-07-04 18:19:51.666736 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-07-04 18:19:51.666746 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-04 18:19:51.666755 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-07-04 18:19:51.666764 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-07-04 18:19:51.666773 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-04 18:19:51.666783 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-07-04 18:19:51.666792 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-07-04 18:19:51.666801 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-04 18:19:51.666811 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 
2025-07-04 18:19:51.666820 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-07-04 18:19:51.666834 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-04 18:19:51.666843 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-07-04 18:19:51.666853 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-07-04 18:19:51.666862 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2025-07-04 18:19:51.666872 | orchestrator | 2025-07-04 18:19:51.666881 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-04 18:19:51.666891 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2025-07-04 18:19:51.666902 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-07-04 18:19:51.666911 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-07-04 18:19:51.666921 | orchestrator | 2025-07-04 18:19:51.666931 | orchestrator | 2025-07-04 18:19:51.666940 | orchestrator | 2025-07-04 18:19:51.666950 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-04 18:19:51.666959 | orchestrator | Friday 04 July 2025 18:19:49 +0000 (0:00:17.460) 0:02:09.835 *********** 2025-07-04 18:19:51.666975 | orchestrator | =============================================================================== 2025-07-04 18:19:51.666984 | orchestrator | create openstack pool(s) ----------------------------------------------- 45.53s 2025-07-04 18:19:51.666994 | orchestrator | generate keys ---------------------------------------------------------- 25.50s 2025-07-04 18:19:51.667003 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.46s 
2025-07-04 18:19:51.667013 | orchestrator | get keys from monitors ------------------------------------------------- 12.05s 2025-07-04 18:19:51.667022 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.17s 2025-07-04 18:19:51.667100 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 2.00s 2025-07-04 18:19:51.667119 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.76s 2025-07-04 18:19:51.667131 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.97s 2025-07-04 18:19:51.667140 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.86s 2025-07-04 18:19:51.667150 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.79s 2025-07-04 18:19:51.667159 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.79s 2025-07-04 18:19:51.667169 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.77s 2025-07-04 18:19:51.667178 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.74s 2025-07-04 18:19:51.667187 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.70s 2025-07-04 18:19:51.667197 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.62s 2025-07-04 18:19:51.667206 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.62s 2025-07-04 18:19:51.667215 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.62s 2025-07-04 18:19:51.667225 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.61s 2025-07-04 18:19:51.667234 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.61s 2025-07-04 
18:19:51.667244 | orchestrator | ceph-facts : Set_fact rgw_instances ------------------------------------- 0.59s 2025-07-04 18:19:51.667259 | orchestrator | 2025-07-04 18:19:51 | INFO  | Task 596705d4-5870-49b3-8d05-ce4baf4ccf43 is in state STARTED 2025-07-04 18:19:51.667270 | orchestrator | 2025-07-04 18:19:51 | INFO  | Task 5567f1e3-be37-493c-b329-c696f211457c is in state STARTED 2025-07-04 18:19:51.667280 | orchestrator | 2025-07-04 18:19:51 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:19:54.717880 | orchestrator | 2025-07-04 18:19:54 | INFO  | Task ca79526d-d4b9-4aa2-a036-a2c2b3d8701a is in state STARTED 2025-07-04 18:19:54.719481 | orchestrator | 2025-07-04 18:19:54 | INFO  | Task 596705d4-5870-49b3-8d05-ce4baf4ccf43 is in state STARTED 2025-07-04 18:19:54.723232 | orchestrator | 2025-07-04 18:19:54 | INFO  | Task 5567f1e3-be37-493c-b329-c696f211457c is in state STARTED 2025-07-04 18:19:54.723290 | orchestrator | 2025-07-04 18:19:54 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:19:57.768618 | orchestrator | 2025-07-04 18:19:57 | INFO  | Task ca79526d-d4b9-4aa2-a036-a2c2b3d8701a is in state STARTED 2025-07-04 18:19:57.770688 | orchestrator | 2025-07-04 18:19:57 | INFO  | Task 596705d4-5870-49b3-8d05-ce4baf4ccf43 is in state STARTED 2025-07-04 18:19:57.772816 | orchestrator | 2025-07-04 18:19:57 | INFO  | Task 5567f1e3-be37-493c-b329-c696f211457c is in state STARTED 2025-07-04 18:19:57.772877 | orchestrator | 2025-07-04 18:19:57 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:20:00.814355 | orchestrator | 2025-07-04 18:20:00 | INFO  | Task ca79526d-d4b9-4aa2-a036-a2c2b3d8701a is in state SUCCESS 2025-07-04 18:20:00.816078 | orchestrator | 2025-07-04 18:20:00.816123 | orchestrator | 2025-07-04 18:20:00.816133 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-04 18:20:00.816163 | orchestrator | 2025-07-04 18:20:00.816184 | orchestrator | TASK [Group hosts based 
on Kolla action] *************************************** 2025-07-04 18:20:00.816192 | orchestrator | Friday 04 July 2025 18:18:05 +0000 (0:00:00.271) 0:00:00.271 *********** 2025-07-04 18:20:00.816197 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:20:00.816205 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:20:00.816212 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:20:00.816220 | orchestrator | 2025-07-04 18:20:00.816227 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-04 18:20:00.816235 | orchestrator | Friday 04 July 2025 18:18:06 +0000 (0:00:00.296) 0:00:00.568 *********** 2025-07-04 18:20:00.816242 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2025-07-04 18:20:00.816250 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2025-07-04 18:20:00.816256 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2025-07-04 18:20:00.816263 | orchestrator | 2025-07-04 18:20:00.816270 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2025-07-04 18:20:00.816277 | orchestrator | 2025-07-04 18:20:00.816284 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-07-04 18:20:00.816291 | orchestrator | Friday 04 July 2025 18:18:06 +0000 (0:00:00.416) 0:00:00.985 *********** 2025-07-04 18:20:00.816298 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-04 18:20:00.816306 | orchestrator | 2025-07-04 18:20:00.816312 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2025-07-04 18:20:00.816319 | orchestrator | Friday 04 July 2025 18:18:07 +0000 (0:00:00.498) 0:00:01.484 *********** 2025-07-04 18:20:00.816330 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': 
False, 'custom_member_list': []}}}}) 2025-07-04 18:20:00.816358 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-07-04 18:20:00.816374 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-07-04 18:20:00.816385 | orchestrator | 2025-07-04 18:20:00.816391 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2025-07-04 18:20:00.816398 | orchestrator | Friday 04 July 2025 18:18:08 +0000 (0:00:01.403) 0:00:02.887 *********** 2025-07-04 18:20:00.816405 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:20:00.816412 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:20:00.816419 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:20:00.816426 | orchestrator | 2025-07-04 18:20:00.816433 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-07-04 18:20:00.816440 | orchestrator | Friday 04 July 2025 18:18:09 +0000 (0:00:00.548) 0:00:03.435 *********** 2025-07-04 18:20:00.816453 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2025-07-04 18:20:00.816463 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2025-07-04 18:20:00.816469 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2025-07-04 18:20:00.816476 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2025-07-04 18:20:00.816482 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2025-07-04 18:20:00.816488 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2025-07-04 18:20:00.816495 | orchestrator | 
skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2025-07-04 18:20:00.816502 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2025-07-04 18:20:00.816509 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2025-07-04 18:20:00.816516 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2025-07-04 18:20:00.816523 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2025-07-04 18:20:00.816530 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2025-07-04 18:20:00.816537 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2025-07-04 18:20:00.816544 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2025-07-04 18:20:00.816551 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2025-07-04 18:20:00.816558 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2025-07-04 18:20:00.816566 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2025-07-04 18:20:00.816573 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2025-07-04 18:20:00.816580 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2025-07-04 18:20:00.816588 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2025-07-04 18:20:00.816595 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2025-07-04 18:20:00.816602 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2025-07-04 18:20:00.816609 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2025-07-04 
18:20:00.816616 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2025-07-04 18:20:00.816624 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2025-07-04 18:20:00.816633 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2025-07-04 18:20:00.816640 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2025-07-04 18:20:00.816652 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2025-07-04 18:20:00.816660 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2025-07-04 18:20:00.816667 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2025-07-04 18:20:00.816675 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2025-07-04 18:20:00.816683 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2025-07-04 18:20:00.816691 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2025-07-04 18:20:00.816700 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for 
testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2025-07-04 18:20:00.816707 | orchestrator | 2025-07-04 18:20:00.816715 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-07-04 18:20:00.816722 | orchestrator | Friday 04 July 2025 18:18:09 +0000 (0:00:00.819) 0:00:04.255 *********** 2025-07-04 18:20:00.816731 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:20:00.816738 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:20:00.816745 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:20:00.816752 | orchestrator | 2025-07-04 18:20:00.816759 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-07-04 18:20:00.816767 | orchestrator | Friday 04 July 2025 18:18:10 +0000 (0:00:00.322) 0:00:04.578 *********** 2025-07-04 18:20:00.816778 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:20:00.816786 | orchestrator | 2025-07-04 18:20:00.816797 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-07-04 18:20:00.816803 | orchestrator | Friday 04 July 2025 18:18:10 +0000 (0:00:00.140) 0:00:04.719 *********** 2025-07-04 18:20:00.816811 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:20:00.816818 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:20:00.816826 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:20:00.816833 | orchestrator | 2025-07-04 18:20:00.816840 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-07-04 18:20:00.816847 | orchestrator | Friday 04 July 2025 18:18:10 +0000 (0:00:00.493) 0:00:05.212 *********** 2025-07-04 18:20:00.816854 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:20:00.816862 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:20:00.816870 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:20:00.816877 | orchestrator | 2025-07-04 18:20:00.816884 | orchestrator | 
TASK [horizon : Check if policies shall be overwritten] ************************ 2025-07-04 18:20:00.816891 | orchestrator | Friday 04 July 2025 18:18:11 +0000 (0:00:00.312) 0:00:05.525 *********** 2025-07-04 18:20:00.816898 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:20:00.816904 | orchestrator | 2025-07-04 18:20:00.816911 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-07-04 18:20:00.816918 | orchestrator | Friday 04 July 2025 18:18:11 +0000 (0:00:00.129) 0:00:05.654 *********** 2025-07-04 18:20:00.816926 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:20:00.816932 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:20:00.816939 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:20:00.816946 | orchestrator | 2025-07-04 18:20:00.816953 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-07-04 18:20:00.816960 | orchestrator | Friday 04 July 2025 18:18:11 +0000 (0:00:00.282) 0:00:05.937 *********** 2025-07-04 18:20:00.816974 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:20:00.816981 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:20:00.816988 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:20:00.816994 | orchestrator | 2025-07-04 18:20:00.817001 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-07-04 18:20:00.817007 | orchestrator | Friday 04 July 2025 18:18:11 +0000 (0:00:00.292) 0:00:06.230 *********** 2025-07-04 18:20:00.817013 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:20:00.817020 | orchestrator | 2025-07-04 18:20:00.817152 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-07-04 18:20:00.817161 | orchestrator | Friday 04 July 2025 18:18:12 +0000 (0:00:00.316) 0:00:06.546 *********** 2025-07-04 18:20:00.817168 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:20:00.817174 | 
orchestrator | skipping: [testbed-node-1] 2025-07-04 18:20:00.817181 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:20:00.817187 | orchestrator | 2025-07-04 18:20:00.817194 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-07-04 18:20:00.817201 | orchestrator | Friday 04 July 2025 18:18:12 +0000 (0:00:00.299) 0:00:06.845 *********** 2025-07-04 18:20:00.817207 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:20:00.817213 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:20:00.817220 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:20:00.817226 | orchestrator | 2025-07-04 18:20:00.817233 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-07-04 18:20:00.817240 | orchestrator | Friday 04 July 2025 18:18:12 +0000 (0:00:00.315) 0:00:07.161 *********** 2025-07-04 18:20:00.817246 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:20:00.817253 | orchestrator | 2025-07-04 18:20:00.817260 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-07-04 18:20:00.817267 | orchestrator | Friday 04 July 2025 18:18:13 +0000 (0:00:00.165) 0:00:07.326 *********** 2025-07-04 18:20:00.817274 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:20:00.817281 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:20:00.817287 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:20:00.817294 | orchestrator | 2025-07-04 18:20:00.817300 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-07-04 18:20:00.817306 | orchestrator | Friday 04 July 2025 18:18:13 +0000 (0:00:00.273) 0:00:07.600 *********** 2025-07-04 18:20:00.817312 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:20:00.817318 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:20:00.817324 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:20:00.817330 | orchestrator | 2025-07-04 
18:20:00.817337 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-07-04 18:20:00.817344 | orchestrator | Friday 04 July 2025 18:18:13 +0000 (0:00:00.508) 0:00:08.109 *********** 2025-07-04 18:20:00.817350 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:20:00.817356 | orchestrator | 2025-07-04 18:20:00.817363 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-07-04 18:20:00.817370 | orchestrator | Friday 04 July 2025 18:18:13 +0000 (0:00:00.132) 0:00:08.241 *********** 2025-07-04 18:20:00.817376 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:20:00.817383 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:20:00.817390 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:20:00.817397 | orchestrator | 2025-07-04 18:20:00.817404 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-07-04 18:20:00.817411 | orchestrator | Friday 04 July 2025 18:18:14 +0000 (0:00:00.313) 0:00:08.555 *********** 2025-07-04 18:20:00.817418 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:20:00.817425 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:20:00.817432 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:20:00.817438 | orchestrator | 2025-07-04 18:20:00.817445 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-07-04 18:20:00.817452 | orchestrator | Friday 04 July 2025 18:18:14 +0000 (0:00:00.300) 0:00:08.855 *********** 2025-07-04 18:20:00.817468 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:20:00.817475 | orchestrator | 2025-07-04 18:20:00.817482 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-07-04 18:20:00.817489 | orchestrator | Friday 04 July 2025 18:18:14 +0000 (0:00:00.120) 0:00:08.976 *********** 2025-07-04 18:20:00.817496 | orchestrator | skipping: [testbed-node-0] 
2025-07-04 18:20:00.817503 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:20:00.817510 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:20:00.817518 | orchestrator | 2025-07-04 18:20:00.817525 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-07-04 18:20:00.817541 | orchestrator | Friday 04 July 2025 18:18:15 +0000 (0:00:00.500) 0:00:09.476 *********** 2025-07-04 18:20:00.817548 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:20:00.817561 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:20:00.817568 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:20:00.817576 | orchestrator | 2025-07-04 18:20:00.817582 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-07-04 18:20:00.817589 | orchestrator | Friday 04 July 2025 18:18:15 +0000 (0:00:00.348) 0:00:09.825 *********** 2025-07-04 18:20:00.817595 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:20:00.817602 | orchestrator | 2025-07-04 18:20:00.817609 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-07-04 18:20:00.817616 | orchestrator | Friday 04 July 2025 18:18:15 +0000 (0:00:00.127) 0:00:09.952 *********** 2025-07-04 18:20:00.817623 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:20:00.817630 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:20:00.817637 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:20:00.817644 | orchestrator | 2025-07-04 18:20:00.817651 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-07-04 18:20:00.817658 | orchestrator | Friday 04 July 2025 18:18:15 +0000 (0:00:00.289) 0:00:10.242 *********** 2025-07-04 18:20:00.817665 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:20:00.817671 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:20:00.817678 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:20:00.817685 | 
orchestrator | 2025-07-04 18:20:00.817692 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-07-04 18:20:00.817699 | orchestrator | Friday 04 July 2025 18:18:16 +0000 (0:00:00.311) 0:00:10.554 *********** 2025-07-04 18:20:00.817706 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:20:00.817713 | orchestrator | 2025-07-04 18:20:00.817720 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-07-04 18:20:00.817726 | orchestrator | Friday 04 July 2025 18:18:16 +0000 (0:00:00.147) 0:00:10.702 *********** 2025-07-04 18:20:00.817733 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:20:00.817740 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:20:00.817747 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:20:00.817754 | orchestrator | 2025-07-04 18:20:00.817761 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-07-04 18:20:00.817768 | orchestrator | Friday 04 July 2025 18:18:16 +0000 (0:00:00.517) 0:00:11.219 *********** 2025-07-04 18:20:00.817774 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:20:00.817781 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:20:00.817789 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:20:00.817797 | orchestrator | 2025-07-04 18:20:00.817805 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-07-04 18:20:00.817813 | orchestrator | Friday 04 July 2025 18:18:17 +0000 (0:00:00.323) 0:00:11.542 *********** 2025-07-04 18:20:00.817821 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:20:00.817829 | orchestrator | 2025-07-04 18:20:00.817837 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-07-04 18:20:00.817845 | orchestrator | Friday 04 July 2025 18:18:17 +0000 (0:00:00.120) 0:00:11.663 *********** 2025-07-04 18:20:00.817854 | orchestrator | 
skipping: [testbed-node-0] 2025-07-04 18:20:00.817862 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:20:00.817869 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:20:00.817885 | orchestrator | 2025-07-04 18:20:00.817893 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-07-04 18:20:00.817902 | orchestrator | Friday 04 July 2025 18:18:17 +0000 (0:00:00.289) 0:00:11.953 *********** 2025-07-04 18:20:00.817913 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:20:00.817922 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:20:00.817930 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:20:00.817938 | orchestrator | 2025-07-04 18:20:00.817946 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-07-04 18:20:00.817953 | orchestrator | Friday 04 July 2025 18:18:18 +0000 (0:00:00.519) 0:00:12.472 *********** 2025-07-04 18:20:00.817961 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:20:00.817969 | orchestrator | 2025-07-04 18:20:00.817976 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-07-04 18:20:00.817984 | orchestrator | Friday 04 July 2025 18:18:18 +0000 (0:00:00.149) 0:00:12.622 *********** 2025-07-04 18:20:00.817993 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:20:00.818000 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:20:00.818008 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:20:00.818080 | orchestrator | 2025-07-04 18:20:00.818091 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2025-07-04 18:20:00.818100 | orchestrator | Friday 04 July 2025 18:18:18 +0000 (0:00:00.316) 0:00:12.939 *********** 2025-07-04 18:20:00.818108 | orchestrator | changed: [testbed-node-1] 2025-07-04 18:20:00.818117 | orchestrator | changed: [testbed-node-2] 2025-07-04 18:20:00.818126 | orchestrator | changed: 
[testbed-node-0] 2025-07-04 18:20:00.818135 | orchestrator | 2025-07-04 18:20:00.818143 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2025-07-04 18:20:00.818151 | orchestrator | Friday 04 July 2025 18:18:20 +0000 (0:00:01.688) 0:00:14.628 *********** 2025-07-04 18:20:00.818159 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-07-04 18:20:00.818167 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-07-04 18:20:00.818174 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-07-04 18:20:00.818182 | orchestrator | 2025-07-04 18:20:00.818190 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2025-07-04 18:20:00.818197 | orchestrator | Friday 04 July 2025 18:18:22 +0000 (0:00:01.959) 0:00:16.587 *********** 2025-07-04 18:20:00.818205 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-07-04 18:20:00.818213 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-07-04 18:20:00.818220 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-07-04 18:20:00.818228 | orchestrator | 2025-07-04 18:20:00.818242 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2025-07-04 18:20:00.818255 | orchestrator | Friday 04 July 2025 18:18:24 +0000 (0:00:02.416) 0:00:19.004 *********** 2025-07-04 18:20:00.818263 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-07-04 18:20:00.818270 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-07-04 18:20:00.818278 | 
orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-07-04 18:20:00.818285 | orchestrator | 2025-07-04 18:20:00.818292 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2025-07-04 18:20:00.818299 | orchestrator | Friday 04 July 2025 18:18:26 +0000 (0:00:01.637) 0:00:20.642 *********** 2025-07-04 18:20:00.818306 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:20:00.818312 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:20:00.818318 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:20:00.818331 | orchestrator | 2025-07-04 18:20:00.818338 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2025-07-04 18:20:00.818345 | orchestrator | Friday 04 July 2025 18:18:26 +0000 (0:00:00.355) 0:00:20.997 *********** 2025-07-04 18:20:00.818352 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:20:00.818360 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:20:00.818367 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:20:00.818375 | orchestrator | 2025-07-04 18:20:00.818382 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-07-04 18:20:00.818389 | orchestrator | Friday 04 July 2025 18:18:27 +0000 (0:00:00.358) 0:00:21.356 *********** 2025-07-04 18:20:00.818396 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-04 18:20:00.818404 | orchestrator | 2025-07-04 18:20:00.818412 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2025-07-04 18:20:00.818419 | orchestrator | Friday 04 July 2025 18:18:27 +0000 (0:00:00.750) 0:00:22.106 *********** 2025-07-04 18:20:00.818429 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': 
False, 'custom_member_list': []}}}}) 2025-07-04 18:20:00.818456 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-07-04 18:20:00.818470 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-07-04 18:20:00.818478 | orchestrator | 2025-07-04 18:20:00.818486 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2025-07-04 18:20:00.818493 | orchestrator | Friday 04 July 2025 18:18:29 +0000 (0:00:01.590) 0:00:23.697 *********** 2025-07-04 18:20:00.818509 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-07-04 18:20:00.818674 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:20:00.818697 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-07-04 18:20:00.818716 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:20:00.818724 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 
'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-07-04 18:20:00.818731 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:20:00.818738 | orchestrator | 2025-07-04 18:20:00.818746 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2025-07-04 18:20:00.818753 | orchestrator | Friday 04 July 2025 18:18:30 +0000 (0:00:00.664) 0:00:24.361 *********** 2025-07-04 18:20:00.818770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 
'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-07-04 18:20:00.818783 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:20:00.818790 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': 
['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-07-04 18:20:00.818798 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:20:00.818815 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ 
}']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-07-04 18:20:00.818827 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:20:00.818834 | orchestrator | 2025-07-04 18:20:00.818839 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2025-07-04 18:20:00.818845 | orchestrator | Friday 04 July 2025 18:18:31 +0000 (0:00:01.156) 0:00:25.517 *********** 2025-07-04 18:20:00.818852 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-07-04 18:20:00.818868 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-07-04 18:20:00.818883 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 
'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-07-04 18:20:00.818895 | orchestrator | 2025-07-04 18:20:00.818902 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-07-04 18:20:00.818909 | orchestrator | Friday 04 July 2025 18:18:32 +0000 (0:00:01.421) 0:00:26.938 *********** 2025-07-04 18:20:00.818916 | 
orchestrator | skipping: [testbed-node-0]
2025-07-04 18:20:00.818923 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:20:00.818930 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:20:00.818937 | orchestrator |
2025-07-04 18:20:00.818944 | orchestrator | TASK [horizon : include_tasks] *************************************************
2025-07-04 18:20:00.818955 | orchestrator | Friday 04 July 2025 18:18:32 +0000 (0:00:00.299) 0:00:27.238 ***********
2025-07-04 18:20:00.818965 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-04 18:20:00.818973 | orchestrator |
2025-07-04 18:20:00.818979 | orchestrator | TASK [horizon : Creating Horizon database] *************************************
2025-07-04 18:20:00.818986 | orchestrator | Friday 04 July 2025 18:18:33 +0000 (0:00:00.782) 0:00:28.021 ***********
2025-07-04 18:20:00.818993 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:20:00.819000 | orchestrator |
2025-07-04 18:20:00.819007 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ********
2025-07-04 18:20:00.819014 | orchestrator | Friday 04 July 2025 18:18:35 +0000 (0:00:02.146) 0:00:30.168 ***********
2025-07-04 18:20:00.819079 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:20:00.819089 | orchestrator |
2025-07-04 18:20:00.819096 | orchestrator | TASK [horizon : Running Horizon bootstrap container] ***************************
2025-07-04 18:20:00.819103 | orchestrator | Friday 04 July 2025 18:18:37 +0000 (0:00:02.082) 0:00:32.250 ***********
2025-07-04 18:20:00.819110 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:20:00.819116 | orchestrator |
2025-07-04 18:20:00.819123 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2025-07-04 18:20:00.819130 | orchestrator | Friday 04 July 2025 18:18:53 +0000 (0:00:15.715) 0:00:47.966 ***********
2025-07-04 18:20:00.819138 | orchestrator |
2025-07-04 18:20:00.819145 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2025-07-04 18:20:00.819152 | orchestrator | Friday 04 July 2025 18:18:53 +0000 (0:00:00.066) 0:00:48.033 ***********
2025-07-04 18:20:00.819159 | orchestrator |
2025-07-04 18:20:00.819166 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2025-07-04 18:20:00.819173 | orchestrator | Friday 04 July 2025 18:18:53 +0000 (0:00:00.064) 0:00:48.097 ***********
2025-07-04 18:20:00.819180 | orchestrator |
2025-07-04 18:20:00.819186 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] **************************
2025-07-04 18:20:00.819192 | orchestrator | Friday 04 July 2025 18:18:53 +0000 (0:00:00.066) 0:00:48.164 ***********
2025-07-04 18:20:00.819198 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:20:00.819205 | orchestrator | changed: [testbed-node-2]
2025-07-04 18:20:00.819211 | orchestrator | changed: [testbed-node-1]
2025-07-04 18:20:00.819217 | orchestrator |
2025-07-04 18:20:00.819223 | orchestrator | PLAY RECAP *********************************************************************
2025-07-04 18:20:00.819230 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2025-07-04 18:20:00.819237 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2025-07-04 18:20:00.819242 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2025-07-04 18:20:00.819248 | orchestrator |
2025-07-04 18:20:00.819254 | orchestrator |
2025-07-04 18:20:00.819260 | orchestrator | TASKS RECAP ********************************************************************
2025-07-04 18:20:00.819266 | orchestrator | Friday 04 July 2025 18:19:59 +0000 (0:01:06.044) 0:01:54.208 ***********
2025-07-04 18:20:00.819273 | orchestrator | ===============================================================================
2025-07-04 18:20:00.819286 | orchestrator | horizon : Restart horizon container ------------------------------------ 66.04s
2025-07-04 18:20:00.819292 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 15.72s
2025-07-04 18:20:00.819299 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.42s
2025-07-04 18:20:00.819304 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.15s
2025-07-04 18:20:00.819310 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.08s
2025-07-04 18:20:00.819316 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.96s
2025-07-04 18:20:00.819321 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.69s
2025-07-04 18:20:00.819327 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.64s
2025-07-04 18:20:00.819333 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.59s
2025-07-04 18:20:00.819339 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.42s
2025-07-04 18:20:00.819347 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.40s
2025-07-04 18:20:00.819354 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 1.16s
2025-07-04 18:20:00.819361 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.82s
2025-07-04 18:20:00.819369 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.78s
2025-07-04 18:20:00.819376 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.75s
2025-07-04 18:20:00.819384 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.66s
2025-07-04 18:20:00.819391 | orchestrator | horizon : Set empty custom policy --------------------------------------- 0.55s
2025-07-04 18:20:00.819398 | orchestrator | horizon : Update policy file name --------------------------------------- 0.52s
2025-07-04 18:20:00.819404 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.52s
2025-07-04 18:20:00.819410 | orchestrator | horizon : Update policy file name --------------------------------------- 0.51s
2025-07-04 18:20:00.819417 | orchestrator | 2025-07-04 18:20:00 | INFO  | Task 596705d4-5870-49b3-8d05-ce4baf4ccf43 is in state STARTED
2025-07-04 18:20:00.819430 | orchestrator | 2025-07-04 18:20:00 | INFO  | Task 5567f1e3-be37-493c-b329-c696f211457c is in state STARTED
2025-07-04 18:20:00.819447 | orchestrator | 2025-07-04 18:20:00 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:20:03.877046 | orchestrator | 2025-07-04 18:20:03 | INFO  | Task 596705d4-5870-49b3-8d05-ce4baf4ccf43 is in state STARTED
2025-07-04 18:20:03.879640 | orchestrator | 2025-07-04 18:20:03 | INFO  | Task 5567f1e3-be37-493c-b329-c696f211457c is in state STARTED
2025-07-04 18:20:03.879767 | orchestrator | 2025-07-04 18:20:03 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:20:06.931199 | orchestrator | 2025-07-04 18:20:06 | INFO  | Task 596705d4-5870-49b3-8d05-ce4baf4ccf43 is in state STARTED
2025-07-04 18:20:06.932749 | orchestrator | 2025-07-04 18:20:06 | INFO  | Task 5567f1e3-be37-493c-b329-c696f211457c is in state STARTED
2025-07-04 18:20:06.932972 | orchestrator | 2025-07-04 18:20:06 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:20:09.992263 | orchestrator | 2025-07-04 18:20:09 | INFO  | Task 596705d4-5870-49b3-8d05-ce4baf4ccf43 is in state STARTED
2025-07-04 18:20:09.993172 | orchestrator | 2025-07-04 18:20:09 | INFO  | Task 5567f1e3-be37-493c-b329-c696f211457c is in state STARTED
2025-07-04 18:20:09.993434 | orchestrator | 2025-07-04 18:20:09 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:20:13.038520 | orchestrator | 2025-07-04 18:20:13 | INFO  | Task 596705d4-5870-49b3-8d05-ce4baf4ccf43 is in state STARTED
2025-07-04 18:20:13.040237 | orchestrator | 2025-07-04 18:20:13 | INFO  | Task 5567f1e3-be37-493c-b329-c696f211457c is in state STARTED
2025-07-04 18:20:13.040312 | orchestrator | 2025-07-04 18:20:13 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:20:16.095153 | orchestrator | 2025-07-04 18:20:16 | INFO  | Task 596705d4-5870-49b3-8d05-ce4baf4ccf43 is in state STARTED
2025-07-04 18:20:16.095929 | orchestrator | 2025-07-04 18:20:16 | INFO  | Task 5567f1e3-be37-493c-b329-c696f211457c is in state STARTED
2025-07-04 18:20:16.095960 | orchestrator | 2025-07-04 18:20:16 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:20:19.149301 | orchestrator | 2025-07-04 18:20:19 | INFO  | Task 596705d4-5870-49b3-8d05-ce4baf4ccf43 is in state STARTED
2025-07-04 18:20:19.151130 | orchestrator | 2025-07-04 18:20:19 | INFO  | Task 5567f1e3-be37-493c-b329-c696f211457c is in state STARTED
2025-07-04 18:20:19.151194 | orchestrator | 2025-07-04 18:20:19 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:20:22.216141 | orchestrator | 2025-07-04 18:20:22 | INFO  | Task 596705d4-5870-49b3-8d05-ce4baf4ccf43 is in state STARTED
2025-07-04 18:20:22.216247 | orchestrator | 2025-07-04 18:20:22 | INFO  | Task 5567f1e3-be37-493c-b329-c696f211457c is in state SUCCESS
2025-07-04 18:20:22.219216 | orchestrator | 2025-07-04 18:20:22 | INFO  | Task 4c09b436-6a68-4b5e-8cf5-0dfe2092dedc is in state STARTED
2025-07-04 18:20:22.219754 | orchestrator | 2025-07-04 18:20:22 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:20:25.276170 | orchestrator | 2025-07-04 18:20:25 | INFO  | Task 596705d4-5870-49b3-8d05-ce4baf4ccf43 is in state STARTED
2025-07-04 18:20:25.277781 | orchestrator | 2025-07-04 18:20:25 | INFO  | Task 4c09b436-6a68-4b5e-8cf5-0dfe2092dedc is in state STARTED
2025-07-04 18:20:25.277997 | orchestrator | 2025-07-04 18:20:25 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:20:28.343315 | orchestrator | 2025-07-04 18:20:28 | INFO  | Task 596705d4-5870-49b3-8d05-ce4baf4ccf43 is in state STARTED
2025-07-04 18:20:28.344102 | orchestrator | 2025-07-04 18:20:28 | INFO  | Task 4c09b436-6a68-4b5e-8cf5-0dfe2092dedc is in state STARTED
2025-07-04 18:20:28.344157 | orchestrator | 2025-07-04 18:20:28 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:20:31.397720 | orchestrator | 2025-07-04 18:20:31 | INFO  | Task 596705d4-5870-49b3-8d05-ce4baf4ccf43 is in state STARTED
2025-07-04 18:20:31.398156 | orchestrator | 2025-07-04 18:20:31 | INFO  | Task 4c09b436-6a68-4b5e-8cf5-0dfe2092dedc is in state STARTED
2025-07-04 18:20:31.398192 | orchestrator | 2025-07-04 18:20:31 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:20:34.452296 | orchestrator | 2025-07-04 18:20:34 | INFO  | Task 596705d4-5870-49b3-8d05-ce4baf4ccf43 is in state STARTED
2025-07-04 18:20:34.453685 | orchestrator | 2025-07-04 18:20:34 | INFO  | Task 4c09b436-6a68-4b5e-8cf5-0dfe2092dedc is in state STARTED
2025-07-04 18:20:34.453731 | orchestrator | 2025-07-04 18:20:34 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:20:37.499733 | orchestrator | 2025-07-04 18:20:37 | INFO  | Task 596705d4-5870-49b3-8d05-ce4baf4ccf43 is in state STARTED
2025-07-04 18:20:37.501836 | orchestrator | 2025-07-04 18:20:37 | INFO  | Task 4c09b436-6a68-4b5e-8cf5-0dfe2092dedc is in state STARTED
2025-07-04 18:20:37.501871 | orchestrator | 2025-07-04 18:20:37 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:20:40.549433 | orchestrator | 2025-07-04 18:20:40 | INFO  | Task 596705d4-5870-49b3-8d05-ce4baf4ccf43 is in state STARTED
2025-07-04 18:20:40.551682 | orchestrator | 2025-07-04 18:20:40 | INFO  | Task 4c09b436-6a68-4b5e-8cf5-0dfe2092dedc is in state STARTED
2025-07-04 18:20:40.551721 | orchestrator | 2025-07-04 18:20:40 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:20:43.640436 | orchestrator | 2025-07-04 18:20:43 | INFO  | Task ef65ad7c-c90f-416c-9489-60cf4bb64592 is in state STARTED
2025-07-04 18:20:43.643879 | orchestrator | 2025-07-04 18:20:43 | INFO  | Task 596705d4-5870-49b3-8d05-ce4baf4ccf43 is in state STARTED
2025-07-04 18:20:43.645153 | orchestrator | 2025-07-04 18:20:43 | INFO  | Task 4c09b436-6a68-4b5e-8cf5-0dfe2092dedc is in state STARTED
2025-07-04 18:20:43.645223 | orchestrator | 2025-07-04 18:20:43 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:20:46.696480 | orchestrator | 2025-07-04 18:20:46 | INFO  | Task ef65ad7c-c90f-416c-9489-60cf4bb64592 is in state STARTED
2025-07-04 18:20:46.699311 | orchestrator | 2025-07-04 18:20:46 | INFO  | Task 596705d4-5870-49b3-8d05-ce4baf4ccf43 is in state STARTED
2025-07-04 18:20:46.702831 | orchestrator | 2025-07-04 18:20:46 | INFO  | Task 4c09b436-6a68-4b5e-8cf5-0dfe2092dedc is in state STARTED
2025-07-04 18:20:46.702895 | orchestrator | 2025-07-04 18:20:46 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:20:49.763352 | orchestrator | 2025-07-04 18:20:49 | INFO  | Task ef65ad7c-c90f-416c-9489-60cf4bb64592 is in state STARTED
2025-07-04 18:20:49.768075 | orchestrator | 2025-07-04 18:20:49 | INFO  | Task 596705d4-5870-49b3-8d05-ce4baf4ccf43 is in state STARTED
2025-07-04 18:20:49.772604 | orchestrator | 2025-07-04 18:20:49 | INFO  | Task 4c09b436-6a68-4b5e-8cf5-0dfe2092dedc is in state STARTED
2025-07-04 18:20:49.772677 | orchestrator | 2025-07-04 18:20:49 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:20:52.839682 | orchestrator | 2025-07-04 18:20:52 | INFO  | Task ef65ad7c-c90f-416c-9489-60cf4bb64592 is in state STARTED
2025-07-04 18:20:52.841289 | orchestrator | 2025-07-04 18:20:52 | INFO  | Task 596705d4-5870-49b3-8d05-ce4baf4ccf43 is in state STARTED
2025-07-04 18:20:52.842564 | orchestrator | 2025-07-04 18:20:52 |
INFO  | Task 4c09b436-6a68-4b5e-8cf5-0dfe2092dedc is in state STARTED 2025-07-04 18:20:52.842883 | orchestrator | 2025-07-04 18:20:52 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:20:55.893183 | orchestrator | 2025-07-04 18:20:55 | INFO  | Task ef65ad7c-c90f-416c-9489-60cf4bb64592 is in state STARTED 2025-07-04 18:20:55.893605 | orchestrator | 2025-07-04 18:20:55 | INFO  | Task 596705d4-5870-49b3-8d05-ce4baf4ccf43 is in state STARTED 2025-07-04 18:20:55.895399 | orchestrator | 2025-07-04 18:20:55 | INFO  | Task 4c09b436-6a68-4b5e-8cf5-0dfe2092dedc is in state STARTED 2025-07-04 18:20:55.895454 | orchestrator | 2025-07-04 18:20:55 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:20:58.941618 | orchestrator | 2025-07-04 18:20:58 | INFO  | Task ef65ad7c-c90f-416c-9489-60cf4bb64592 is in state STARTED 2025-07-04 18:20:58.942078 | orchestrator | 2025-07-04 18:20:58 | INFO  | Task b8d0347d-dad2-42ee-bd67-6689c5f70861 is in state STARTED 2025-07-04 18:20:58.943214 | orchestrator | 2025-07-04 18:20:58 | INFO  | Task 85760d6e-c462-4bb8-82fc-ad0fc30010f4 is in state STARTED 2025-07-04 18:20:58.946365 | orchestrator | 2025-07-04 18:20:58 | INFO  | Task 596705d4-5870-49b3-8d05-ce4baf4ccf43 is in state SUCCESS 2025-07-04 18:20:58.949467 | orchestrator | 2025-07-04 18:20:58.949532 | orchestrator | 2025-07-04 18:20:58.949546 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2025-07-04 18:20:58.949559 | orchestrator | 2025-07-04 18:20:58.949570 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2025-07-04 18:20:58.949582 | orchestrator | Friday 04 July 2025 18:19:54 +0000 (0:00:00.170) 0:00:00.170 *********** 2025-07-04 18:20:58.949599 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2025-07-04 18:20:58.949622 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => 
(item=ceph.client.cinder.keyring) 2025-07-04 18:20:58.949670 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-07-04 18:20:58.949692 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2025-07-04 18:20:58.949710 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-07-04 18:20:58.949721 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2025-07-04 18:20:58.949740 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2025-07-04 18:20:58.949751 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2025-07-04 18:20:58.949762 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2025-07-04 18:20:58.949772 | orchestrator | 2025-07-04 18:20:58.949783 | orchestrator | TASK [Create share directory] ************************************************** 2025-07-04 18:20:58.949794 | orchestrator | Friday 04 July 2025 18:19:58 +0000 (0:00:04.295) 0:00:04.465 *********** 2025-07-04 18:20:58.949805 | orchestrator | changed: [testbed-manager -> localhost] 2025-07-04 18:20:58.949816 | orchestrator | 2025-07-04 18:20:58.949827 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2025-07-04 18:20:58.949837 | orchestrator | Friday 04 July 2025 18:19:59 +0000 (0:00:01.055) 0:00:05.521 *********** 2025-07-04 18:20:58.949848 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2025-07-04 18:20:58.949865 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-07-04 18:20:58.949883 | orchestrator | ok: [testbed-manager -> localhost] => 
(item=ceph.client.cinder.keyring) 2025-07-04 18:20:58.949902 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2025-07-04 18:20:58.949919 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-07-04 18:20:58.949936 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2025-07-04 18:20:58.949954 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2025-07-04 18:20:58.949971 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2025-07-04 18:20:58.950092 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2025-07-04 18:20:58.950121 | orchestrator | 2025-07-04 18:20:58.950141 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2025-07-04 18:20:58.950160 | orchestrator | Friday 04 July 2025 18:20:13 +0000 (0:00:13.584) 0:00:19.105 *********** 2025-07-04 18:20:58.950181 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2025-07-04 18:20:58.950194 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-07-04 18:20:58.950206 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-07-04 18:20:58.950218 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2025-07-04 18:20:58.950231 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-07-04 18:20:58.950244 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2025-07-04 18:20:58.950256 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2025-07-04 18:20:58.950268 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2025-07-04 18:20:58.950280 | orchestrator | changed: [testbed-manager] => 
(item=ceph.client.manila.keyring) 2025-07-04 18:20:58.950293 | orchestrator | 2025-07-04 18:20:58.950305 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-04 18:20:58.950318 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-04 18:20:58.950344 | orchestrator | 2025-07-04 18:20:58.950471 | orchestrator | 2025-07-04 18:20:58.950484 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-04 18:20:58.950495 | orchestrator | Friday 04 July 2025 18:20:20 +0000 (0:00:07.121) 0:00:26.226 *********** 2025-07-04 18:20:58.950506 | orchestrator | =============================================================================== 2025-07-04 18:20:58.950517 | orchestrator | Write ceph keys to the share directory --------------------------------- 13.58s 2025-07-04 18:20:58.950528 | orchestrator | Write ceph keys to the configuration directory -------------------------- 7.12s 2025-07-04 18:20:58.950538 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.30s 2025-07-04 18:20:58.950549 | orchestrator | Create share directory -------------------------------------------------- 1.06s 2025-07-04 18:20:58.950560 | orchestrator | 2025-07-04 18:20:58.950570 | orchestrator | 2025-07-04 18:20:58.950581 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-04 18:20:58.950598 | orchestrator | 2025-07-04 18:20:58.950677 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-04 18:20:58.950701 | orchestrator | Friday 04 July 2025 18:18:05 +0000 (0:00:00.256) 0:00:00.256 *********** 2025-07-04 18:20:58.950721 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:20:58.950739 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:20:58.950754 | orchestrator | ok: [testbed-node-2] 2025-07-04 
18:20:58.950764 | orchestrator | 2025-07-04 18:20:58.950775 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-04 18:20:58.950786 | orchestrator | Friday 04 July 2025 18:18:06 +0000 (0:00:00.285) 0:00:00.542 *********** 2025-07-04 18:20:58.950796 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-07-04 18:20:58.950808 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-07-04 18:20:58.950819 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-07-04 18:20:58.950829 | orchestrator | 2025-07-04 18:20:58.950840 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2025-07-04 18:20:58.950850 | orchestrator | 2025-07-04 18:20:58.950861 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-07-04 18:20:58.950871 | orchestrator | Friday 04 July 2025 18:18:06 +0000 (0:00:00.463) 0:00:01.005 *********** 2025-07-04 18:20:58.950891 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-04 18:20:58.950902 | orchestrator | 2025-07-04 18:20:58.950913 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2025-07-04 18:20:58.950923 | orchestrator | Friday 04 July 2025 18:18:07 +0000 (0:00:00.543) 0:00:01.548 *********** 2025-07-04 18:20:58.950941 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-04 18:20:58.950959 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-04 18:20:58.951115 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-04 18:20:58.951147 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-07-04 18:20:58.951179 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-07-04 
18:20:58.951199 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-07-04 18:20:58.951222 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-04 18:20:58.951262 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-04 18:20:58.951285 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-04 18:20:58.951305 | orchestrator | 2025-07-04 18:20:58.951325 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2025-07-04 18:20:58.951338 | orchestrator | Friday 04 July 2025 18:18:09 +0000 (0:00:01.909) 0:00:03.458 *********** 2025-07-04 18:20:58.951360 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml) 2025-07-04 18:20:58.951373 | orchestrator | 2025-07-04 18:20:58.951385 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2025-07-04 18:20:58.951397 | orchestrator | Friday 04 July 2025 18:18:09 +0000 (0:00:00.949) 0:00:04.407 *********** 2025-07-04 18:20:58.951409 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:20:58.951422 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:20:58.951434 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:20:58.951447 | orchestrator | 2025-07-04 18:20:58.951460 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2025-07-04 18:20:58.951472 | orchestrator | Friday 04 July 2025 18:18:10 +0000 (0:00:00.546) 0:00:04.953 *********** 2025-07-04 18:20:58.951485 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-07-04 18:20:58.951497 | orchestrator | 2025-07-04 18:20:58.951510 | 
orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-07-04 18:20:58.951523 | orchestrator | Friday 04 July 2025 18:18:11 +0000 (0:00:00.679) 0:00:05.633 *********** 2025-07-04 18:20:58.951535 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-04 18:20:58.951546 | orchestrator | 2025-07-04 18:20:58.951556 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2025-07-04 18:20:58.951573 | orchestrator | Friday 04 July 2025 18:18:11 +0000 (0:00:00.523) 0:00:06.157 *********** 2025-07-04 18:20:58.951586 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-04 18:20:58.951619 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-04 18:20:58.951638 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-04 18:20:58.951669 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': 
{'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-07-04 18:20:58.951696 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-07-04 18:20:58.951709 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-07-04 18:20:58.951726 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-04 18:20:58.951736 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-04 18:20:58.951746 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-04 18:20:58.951756 | orchestrator | 2025-07-04 18:20:58.951766 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] 
*** 2025-07-04 18:20:58.951775 | orchestrator | Friday 04 July 2025 18:18:15 +0000 (0:00:03.484) 0:00:09.641 *********** 2025-07-04 18:20:58.951794 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-07-04 18:20:58.951810 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-04 18:20:58.951827 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 
'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-07-04 18:20:58.951838 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:20:58.951849 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-07-04 18:20:58.951860 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-04 18:20:58.951876 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-07-04 18:20:58.951887 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:20:58.951902 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-07-04 18:20:58.951919 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-04 18:20:58.951930 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-07-04 18:20:58.951940 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:20:58.951950 | orchestrator | 2025-07-04 18:20:58.951959 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2025-07-04 18:20:58.951969 | orchestrator | Friday 04 July 2025 18:18:15 +0000 (0:00:00.603) 0:00:10.244 *********** 2025-07-04 18:20:58.952008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-07-04 18:20:58.952030 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-04 18:20:58.952045 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-07-04 18:20:58.952061 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:20:58.952072 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-07-04 18:20:58.952083 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 
'timeout': '30'}}})  2025-07-04 18:20:58.952093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-07-04 18:20:58.952103 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:20:58.952121 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-07-04 18:20:58.952137 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-04 18:20:58.952153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-07-04 18:20:58.952163 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:20:58.952173 | orchestrator | 2025-07-04 18:20:58.952183 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2025-07-04 18:20:58.952193 | orchestrator | Friday 04 July 2025 18:18:16 +0000 (0:00:00.778) 0:00:11.023 *********** 2025-07-04 18:20:58.952203 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-04 18:20:58.952215 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-04 18:20:58.952243 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-04 18:20:58.952272 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-07-04 18:20:58.952394 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 
'timeout': '30'}}}) 2025-07-04 18:20:58.952431 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-07-04 18:20:58.952442 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-04 18:20:58.952452 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 
2025-07-04 18:20:58.952475 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-04 18:20:58.952494 | orchestrator | 2025-07-04 18:20:58.952504 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2025-07-04 18:20:58.952514 | orchestrator | Friday 04 July 2025 18:18:20 +0000 (0:00:03.432) 0:00:14.456 *********** 2025-07-04 18:20:58.952530 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': 
['balance roundrobin']}}}}) 2025-07-04 18:20:58.952541 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-04 18:20:58.952552 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-04 18:20:58.952718 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-04 18:20:58.952756 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-04 18:20:58.952773 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-04 18:20:58.952784 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-04 18:20:58.952794 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-04 18:20:58.952804 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-04 18:20:58.952814 | orchestrator | 2025-07-04 18:20:58.952824 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2025-07-04 18:20:58.952834 | orchestrator | Friday 04 July 2025 18:18:25 +0000 (0:00:05.180) 0:00:19.636 *********** 2025-07-04 18:20:58.952844 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:20:58.952854 | orchestrator | changed: [testbed-node-1] 2025-07-04 18:20:58.952864 | orchestrator | changed: [testbed-node-2] 2025-07-04 18:20:58.952879 | orchestrator | 2025-07-04 18:20:58.952889 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2025-07-04 18:20:58.952898 | orchestrator | Friday 04 July 2025 18:18:26 +0000 (0:00:01.438) 0:00:21.074 *********** 2025-07-04 18:20:58.952908 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:20:58.952917 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:20:58.952926 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:20:58.952936 | orchestrator | 2025-07-04 18:20:58.952946 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2025-07-04 18:20:58.952961 | orchestrator | Friday 04 July 2025 18:18:27 +0000 (0:00:00.646) 0:00:21.721 *********** 2025-07-04 18:20:58.952970 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:20:58.953013 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:20:58.953030 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:20:58.953047 | orchestrator | 2025-07-04 18:20:58.953062 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2025-07-04 18:20:58.953073 | orchestrator | Friday 04 July 2025 18:18:27 +0000 (0:00:00.491) 0:00:22.213 *********** 
2025-07-04 18:20:58.953082 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:20:58.953092 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:20:58.953101 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:20:58.953111 | orchestrator | 2025-07-04 18:20:58.953120 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2025-07-04 18:20:58.953130 | orchestrator | Friday 04 July 2025 18:18:28 +0000 (0:00:00.312) 0:00:22.525 *********** 2025-07-04 18:20:58.953146 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-04 18:20:58.953158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-04 18:20:58.953169 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-04 18:20:58.953187 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-04 18:20:58.953205 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-04 18:20:58.953222 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-04 18:20:58.953232 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-04 18:20:58.953243 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-04 18:20:58.953253 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-04 18:20:58.953270 | orchestrator | 2025-07-04 18:20:58.953280 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-07-04 18:20:58.953294 | orchestrator | Friday 04 July 2025 18:18:30 +0000 (0:00:02.358) 0:00:24.884 *********** 
2025-07-04 18:20:58.953310 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:20:58.953326 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:20:58.953342 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:20:58.953358 | orchestrator |
2025-07-04 18:20:58.953375 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ******************************
2025-07-04 18:20:58.953394 | orchestrator | Friday 04 July 2025 18:18:30 +0000 (0:00:00.328) 0:00:25.212 ***********
2025-07-04 18:20:58.953410 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2025-07-04 18:20:58.953427 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2025-07-04 18:20:58.953445 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2025-07-04 18:20:58.953461 | orchestrator |
2025-07-04 18:20:58.953485 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] **************
2025-07-04 18:20:58.953501 | orchestrator | Friday 04 July 2025 18:18:32 +0000 (0:00:02.097) 0:00:27.310 ***********
2025-07-04 18:20:58.953521 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-07-04 18:20:58.953539 | orchestrator |
2025-07-04 18:20:58.953556 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ******************************
2025-07-04 18:20:58.953573 | orchestrator | Friday 04 July 2025 18:18:33 +0000 (0:00:00.919) 0:00:28.229 ***********
2025-07-04 18:20:58.953591 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:20:58.953607 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:20:58.953625 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:20:58.953641 | orchestrator |
2025-07-04 18:20:58.953657 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] *****************
2025-07-04 18:20:58.953673 | orchestrator | Friday 04 July 2025 18:18:34 +0000 (0:00:00.639) 0:00:28.869 ***********
2025-07-04 18:20:58.953690 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-07-04 18:20:58.953708 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-07-04 18:20:58.953724 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-07-04 18:20:58.953739 | orchestrator |
2025-07-04 18:20:58.953749 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] ***
2025-07-04 18:20:58.953765 | orchestrator | Friday 04 July 2025 18:18:35 +0000 (0:00:01.135) 0:00:30.005 ***********
2025-07-04 18:20:58.953775 | orchestrator | ok: [testbed-node-0]
2025-07-04 18:20:58.953785 | orchestrator | ok: [testbed-node-1]
2025-07-04 18:20:58.953795 | orchestrator | ok: [testbed-node-2]
2025-07-04 18:20:58.953804 | orchestrator |
2025-07-04 18:20:58.953814 | orchestrator | TASK [keystone : Copying files for keystone-fernet] ****************************
2025-07-04 18:20:58.953823 | orchestrator | Friday 04 July 2025 18:18:35 +0000 (0:00:00.293) 0:00:30.298 ***********
2025-07-04 18:20:58.953832 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2025-07-04 18:20:58.953842 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2025-07-04 18:20:58.953851 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2025-07-04 18:20:58.953861 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2025-07-04 18:20:58.953880 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2025-07-04 18:20:58.953890 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2025-07-04 18:20:58.953899 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2025-07-04 18:20:58.953909 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-07-04 18:20:58.953918 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-07-04 18:20:58.953928 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-07-04 18:20:58.953938 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-07-04 18:20:58.953947 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-07-04 18:20:58.953957 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-07-04 18:20:58.953967 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-07-04 18:20:58.954013 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-07-04 18:20:58.954074 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-07-04 18:20:58.954085 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-07-04 18:20:58.954095 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-07-04 18:20:58.954104 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-07-04 18:20:58.954114 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-07-04 18:20:58.954123 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-07-04 18:20:58.954133 | orchestrator | 2025-07-04 18:20:58.954143 | orchestrator | TASK [keystone : Copying files for 
keystone-ssh] ******************************* 2025-07-04 18:20:58.954152 | orchestrator | Friday 04 July 2025 18:18:44 +0000 (0:00:08.894) 0:00:39.193 *********** 2025-07-04 18:20:58.954162 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-07-04 18:20:58.954171 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-07-04 18:20:58.954181 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-07-04 18:20:58.954190 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-07-04 18:20:58.954199 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-07-04 18:20:58.954209 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-07-04 18:20:58.954219 | orchestrator | 2025-07-04 18:20:58.954229 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2025-07-04 18:20:58.954247 | orchestrator | Friday 04 July 2025 18:18:47 +0000 (0:00:02.651) 0:00:41.844 *********** 2025-07-04 18:20:58.954265 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': 
'5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-04 18:20:58.954285 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-04 18:20:58.954298 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': 
'30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-04 18:20:58.954309 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-07-04 18:20:58.954329 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-07-04 18:20:58.954348 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-07-04 18:20:58.954366 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-04 18:20:58.954377 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-04 18:20:58.954388 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-04 18:20:58.954410 | orchestrator | 2025-07-04 18:20:58.954421 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-07-04 18:20:58.954430 | orchestrator | Friday 04 July 2025 18:18:49 +0000 (0:00:02.328) 0:00:44.172 *********** 2025-07-04 18:20:58.954440 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:20:58.954451 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:20:58.954461 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:20:58.954471 | orchestrator | 2025-07-04 18:20:58.954481 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2025-07-04 18:20:58.954499 | orchestrator | Friday 04 July 2025 18:18:50 +0000 (0:00:00.291) 0:00:44.464 *********** 2025-07-04 18:20:58.954517 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:20:58.954534 | orchestrator | 2025-07-04 18:20:58.954551 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2025-07-04 18:20:58.954570 | orchestrator | Friday 04 July 2025 18:18:52 +0000 (0:00:02.224) 0:00:46.688 *********** 2025-07-04 18:20:58.954589 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:20:58.954607 | orchestrator | 2025-07-04 18:20:58.954795 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2025-07-04 18:20:58.954809 | orchestrator | Friday 04 July 2025 18:18:55 +0000 (0:00:02.778) 0:00:49.467 *********** 2025-07-04 18:20:58.954819 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:20:58.954828 | orchestrator | ok: 
[testbed-node-2] 2025-07-04 18:20:58.954838 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:20:58.954848 | orchestrator | 2025-07-04 18:20:58.954859 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2025-07-04 18:20:58.954869 | orchestrator | Friday 04 July 2025 18:18:56 +0000 (0:00:01.165) 0:00:50.632 *********** 2025-07-04 18:20:58.954891 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:20:58.954900 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:20:58.954910 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:20:58.954920 | orchestrator | 2025-07-04 18:20:58.954939 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2025-07-04 18:20:58.954950 | orchestrator | Friday 04 July 2025 18:18:56 +0000 (0:00:00.337) 0:00:50.970 *********** 2025-07-04 18:20:58.954960 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:20:58.954970 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:20:58.955011 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:20:58.955022 | orchestrator | 2025-07-04 18:20:58.955031 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2025-07-04 18:20:58.955041 | orchestrator | Friday 04 July 2025 18:18:56 +0000 (0:00:00.333) 0:00:51.304 *********** 2025-07-04 18:20:58.955051 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:20:58.955060 | orchestrator | 2025-07-04 18:20:58.955070 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2025-07-04 18:20:58.955079 | orchestrator | Friday 04 July 2025 18:19:10 +0000 (0:00:13.567) 0:01:04.871 *********** 2025-07-04 18:20:58.955089 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:20:58.955099 | orchestrator | 2025-07-04 18:20:58.955108 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-07-04 18:20:58.955118 | orchestrator | Friday 
04 July 2025 18:19:20 +0000 (0:00:10.219) 0:01:15.091 *********** 2025-07-04 18:20:58.955127 | orchestrator | 2025-07-04 18:20:58.955137 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-07-04 18:20:58.955154 | orchestrator | Friday 04 July 2025 18:19:20 +0000 (0:00:00.254) 0:01:15.346 *********** 2025-07-04 18:20:58.955164 | orchestrator | 2025-07-04 18:20:58.955174 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-07-04 18:20:58.955183 | orchestrator | Friday 04 July 2025 18:19:20 +0000 (0:00:00.065) 0:01:15.411 *********** 2025-07-04 18:20:58.955193 | orchestrator | 2025-07-04 18:20:58.955203 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2025-07-04 18:20:58.955213 | orchestrator | Friday 04 July 2025 18:19:21 +0000 (0:00:00.061) 0:01:15.473 *********** 2025-07-04 18:20:58.955222 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:20:58.955232 | orchestrator | changed: [testbed-node-2] 2025-07-04 18:20:58.955241 | orchestrator | changed: [testbed-node-1] 2025-07-04 18:20:58.955250 | orchestrator | 2025-07-04 18:20:58.955260 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2025-07-04 18:20:58.955270 | orchestrator | Friday 04 July 2025 18:19:50 +0000 (0:00:29.750) 0:01:45.223 *********** 2025-07-04 18:20:58.955280 | orchestrator | changed: [testbed-node-1] 2025-07-04 18:20:58.955290 | orchestrator | changed: [testbed-node-2] 2025-07-04 18:20:58.955299 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:20:58.955309 | orchestrator | 2025-07-04 18:20:58.955318 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2025-07-04 18:20:58.955328 | orchestrator | Friday 04 July 2025 18:19:58 +0000 (0:00:07.633) 0:01:52.856 *********** 2025-07-04 18:20:58.955337 | orchestrator | changed: [testbed-node-0] 2025-07-04 
18:20:58.955347 | orchestrator | changed: [testbed-node-2] 2025-07-04 18:20:58.955357 | orchestrator | changed: [testbed-node-1] 2025-07-04 18:20:58.955366 | orchestrator | 2025-07-04 18:20:58.955376 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-07-04 18:20:58.955386 | orchestrator | Friday 04 July 2025 18:20:09 +0000 (0:00:11.452) 0:02:04.309 *********** 2025-07-04 18:20:58.955398 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-04 18:20:58.955409 | orchestrator | 2025-07-04 18:20:58.955420 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2025-07-04 18:20:58.955431 | orchestrator | Friday 04 July 2025 18:20:10 +0000 (0:00:00.834) 0:02:05.144 *********** 2025-07-04 18:20:58.955442 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:20:58.955460 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:20:58.955471 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:20:58.955482 | orchestrator | 2025-07-04 18:20:58.955493 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2025-07-04 18:20:58.955504 | orchestrator | Friday 04 July 2025 18:20:11 +0000 (0:00:00.770) 0:02:05.914 *********** 2025-07-04 18:20:58.955515 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:20:58.955526 | orchestrator | 2025-07-04 18:20:58.955536 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2025-07-04 18:20:58.955548 | orchestrator | Friday 04 July 2025 18:20:13 +0000 (0:00:01.767) 0:02:07.682 *********** 2025-07-04 18:20:58.955559 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2025-07-04 18:20:58.955570 | orchestrator | 2025-07-04 18:20:58.955581 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2025-07-04 18:20:58.955592 | orchestrator | 
Friday 04 July 2025 18:20:24 +0000 (0:00:11.073) 0:02:18.756 *********** 2025-07-04 18:20:58.955603 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2025-07-04 18:20:58.955619 | orchestrator | 2025-07-04 18:20:58.955635 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2025-07-04 18:20:58.955654 | orchestrator | Friday 04 July 2025 18:20:45 +0000 (0:00:21.378) 0:02:40.134 *********** 2025-07-04 18:20:58.955669 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2025-07-04 18:20:58.955685 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2025-07-04 18:20:58.955703 | orchestrator | 2025-07-04 18:20:58.955722 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2025-07-04 18:20:58.955738 | orchestrator | Friday 04 July 2025 18:20:52 +0000 (0:00:06.901) 0:02:47.036 *********** 2025-07-04 18:20:58.955755 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:20:58.955771 | orchestrator | 2025-07-04 18:20:58.955788 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2025-07-04 18:20:58.955804 | orchestrator | Friday 04 July 2025 18:20:52 +0000 (0:00:00.337) 0:02:47.373 *********** 2025-07-04 18:20:58.955822 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:20:58.955837 | orchestrator | 2025-07-04 18:20:58.955853 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2025-07-04 18:20:58.955868 | orchestrator | Friday 04 July 2025 18:20:53 +0000 (0:00:00.136) 0:02:47.510 *********** 2025-07-04 18:20:58.955886 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:20:58.955902 | orchestrator | 2025-07-04 18:20:58.955930 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2025-07-04 18:20:58.955940 | 
orchestrator | Friday 04 July 2025 18:20:53 +0000 (0:00:00.144) 0:02:47.654 *********** 2025-07-04 18:20:58.955950 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:20:58.955959 | orchestrator | 2025-07-04 18:20:58.955969 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2025-07-04 18:20:58.956015 | orchestrator | Friday 04 July 2025 18:20:53 +0000 (0:00:00.377) 0:02:48.032 *********** 2025-07-04 18:20:58.956026 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:20:58.956035 | orchestrator | 2025-07-04 18:20:58.956045 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-07-04 18:20:58.956054 | orchestrator | Friday 04 July 2025 18:20:56 +0000 (0:00:03.170) 0:02:51.202 *********** 2025-07-04 18:20:58.956064 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:20:58.956074 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:20:58.956083 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:20:58.956093 | orchestrator | 2025-07-04 18:20:58.956103 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-04 18:20:58.956120 | orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-07-04 18:20:58.956132 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-07-04 18:20:58.956151 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-07-04 18:20:58.956161 | orchestrator | 2025-07-04 18:20:58.956171 | orchestrator | 2025-07-04 18:20:58.956180 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-04 18:20:58.956190 | orchestrator | Friday 04 July 2025 18:20:57 +0000 (0:00:00.662) 0:02:51.865 *********** 2025-07-04 18:20:58.956199 | orchestrator | 
=============================================================================== 2025-07-04 18:20:58.956209 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 29.75s 2025-07-04 18:20:58.956219 | orchestrator | service-ks-register : keystone | Creating services --------------------- 21.38s 2025-07-04 18:20:58.956228 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 13.57s 2025-07-04 18:20:58.956238 | orchestrator | keystone : Restart keystone container ---------------------------------- 11.45s 2025-07-04 18:20:58.956247 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 11.07s 2025-07-04 18:20:58.956257 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 10.22s 2025-07-04 18:20:58.956266 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 8.89s 2025-07-04 18:20:58.956276 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 7.63s 2025-07-04 18:20:58.956285 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 6.90s 2025-07-04 18:20:58.956295 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.18s 2025-07-04 18:20:58.956304 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.48s 2025-07-04 18:20:58.956314 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.43s 2025-07-04 18:20:58.956323 | orchestrator | keystone : Creating default user role ----------------------------------- 3.17s 2025-07-04 18:20:58.956333 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.78s 2025-07-04 18:20:58.956343 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.65s 2025-07-04 18:20:58.956352 | orchestrator | keystone : 
Copying over existing policy file ---------------------------- 2.36s 2025-07-04 18:20:58.956361 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.33s 2025-07-04 18:20:58.956371 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.22s 2025-07-04 18:20:58.956381 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 2.10s 2025-07-04 18:20:58.956390 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 1.91s 2025-07-04 18:20:58.956400 | orchestrator | 2025-07-04 18:20:58 | INFO  | Task 518bdaeb-845c-4c85-8e3b-c17f03e1251f is in state STARTED 2025-07-04 18:20:58.956410 | orchestrator | 2025-07-04 18:20:58 | INFO  | Task 4c09b436-6a68-4b5e-8cf5-0dfe2092dedc is in state STARTED 2025-07-04 18:20:58.956419 | orchestrator | 2025-07-04 18:20:58 | INFO  | Task 1c1af083-d8d2-4b30-8bdc-e75fc75e0db0 is in state STARTED 2025-07-04 18:20:58.956429 | orchestrator | 2025-07-04 18:20:58 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:21:02.003831 | orchestrator | 2025-07-04 18:21:02 | INFO  | Task ef65ad7c-c90f-416c-9489-60cf4bb64592 is in state SUCCESS 2025-07-04 18:21:02.004309 | orchestrator | 2025-07-04 18:21:02 | INFO  | Task b8d0347d-dad2-42ee-bd67-6689c5f70861 is in state STARTED 2025-07-04 18:21:02.005166 | orchestrator | 2025-07-04 18:21:02 | INFO  | Task 85760d6e-c462-4bb8-82fc-ad0fc30010f4 is in state STARTED 2025-07-04 18:21:02.006114 | orchestrator | 2025-07-04 18:21:02 | INFO  | Task 518bdaeb-845c-4c85-8e3b-c17f03e1251f is in state STARTED 2025-07-04 18:21:02.006898 | orchestrator | 2025-07-04 18:21:02 | INFO  | Task 4c09b436-6a68-4b5e-8cf5-0dfe2092dedc is in state STARTED 2025-07-04 18:21:02.012739 | orchestrator | 2025-07-04 18:21:02 | INFO  | Task 1c1af083-d8d2-4b30-8bdc-e75fc75e0db0 is in state STARTED 2025-07-04 18:21:02.012779 | orchestrator | 2025-07-04 18:21:02 | INFO  | Wait 1 second(s) until 
the next check 2025-07-04 18:21:05.048506 | orchestrator | 2025-07-04 18:21:05 | INFO  | Task b8d0347d-dad2-42ee-bd67-6689c5f70861 is in state STARTED 2025-07-04 18:21:05.052347 | orchestrator | 2025-07-04 18:21:05 | INFO  | Task 85760d6e-c462-4bb8-82fc-ad0fc30010f4 is in state STARTED 2025-07-04 18:21:05.052711 | orchestrator | 2025-07-04 18:21:05 | INFO  | Task 518bdaeb-845c-4c85-8e3b-c17f03e1251f is in state STARTED 2025-07-04 18:21:05.055608 | orchestrator | 2025-07-04 18:21:05 | INFO  | Task 4c09b436-6a68-4b5e-8cf5-0dfe2092dedc is in state STARTED 2025-07-04 18:21:05.058295 | orchestrator | 2025-07-04 18:21:05 | INFO  | Task 1c1af083-d8d2-4b30-8bdc-e75fc75e0db0 is in state STARTED 2025-07-04 18:21:05.058374 | orchestrator | 2025-07-04 18:21:05 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:21:08.107763 | orchestrator | 2025-07-04 18:21:08 | INFO  | Task b8d0347d-dad2-42ee-bd67-6689c5f70861 is in state STARTED 2025-07-04 18:21:08.109955 | orchestrator | 2025-07-04 18:21:08 | INFO  | Task 85760d6e-c462-4bb8-82fc-ad0fc30010f4 is in state STARTED 2025-07-04 18:21:08.112138 | orchestrator | 2025-07-04 18:21:08 | INFO  | Task 518bdaeb-845c-4c85-8e3b-c17f03e1251f is in state STARTED 2025-07-04 18:21:08.115526 | orchestrator | 2025-07-04 18:21:08 | INFO  | Task 4c09b436-6a68-4b5e-8cf5-0dfe2092dedc is in state STARTED 2025-07-04 18:21:08.117961 | orchestrator | 2025-07-04 18:21:08 | INFO  | Task 1c1af083-d8d2-4b30-8bdc-e75fc75e0db0 is in state STARTED 2025-07-04 18:21:08.118092 | orchestrator | 2025-07-04 18:21:08 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:21:11.170377 | orchestrator | 2025-07-04 18:21:11 | INFO  | Task b8d0347d-dad2-42ee-bd67-6689c5f70861 is in state STARTED 2025-07-04 18:21:11.171589 | orchestrator | 2025-07-04 18:21:11 | INFO  | Task 85760d6e-c462-4bb8-82fc-ad0fc30010f4 is in state STARTED 2025-07-04 18:21:11.174230 | orchestrator | 2025-07-04 18:21:11 | INFO  | Task 518bdaeb-845c-4c85-8e3b-c17f03e1251f is in state 
STARTED 2025-07-04 18:21:11.174938 | orchestrator | 2025-07-04 18:21:11 | INFO  | Task 4c09b436-6a68-4b5e-8cf5-0dfe2092dedc is in state STARTED 2025-07-04 18:21:11.177580 | orchestrator | 2025-07-04 18:21:11 | INFO  | Task 1c1af083-d8d2-4b30-8bdc-e75fc75e0db0 is in state STARTED 2025-07-04 18:21:11.177625 | orchestrator | 2025-07-04 18:21:11 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:21:14.231816 | orchestrator | 2025-07-04 18:21:14 | INFO  | Task b8d0347d-dad2-42ee-bd67-6689c5f70861 is in state STARTED 2025-07-04 18:21:14.233556 | orchestrator | 2025-07-04 18:21:14 | INFO  | Task 85760d6e-c462-4bb8-82fc-ad0fc30010f4 is in state STARTED 2025-07-04 18:21:14.236110 | orchestrator | 2025-07-04 18:21:14 | INFO  | Task 518bdaeb-845c-4c85-8e3b-c17f03e1251f is in state STARTED 2025-07-04 18:21:14.237690 | orchestrator | 2025-07-04 18:21:14 | INFO  | Task 4c09b436-6a68-4b5e-8cf5-0dfe2092dedc is in state STARTED 2025-07-04 18:21:14.239123 | orchestrator | 2025-07-04 18:21:14 | INFO  | Task 1c1af083-d8d2-4b30-8bdc-e75fc75e0db0 is in state STARTED 2025-07-04 18:21:14.239157 | orchestrator | 2025-07-04 18:21:14 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:21:17.287301 | orchestrator | 2025-07-04 18:21:17 | INFO  | Task b8d0347d-dad2-42ee-bd67-6689c5f70861 is in state STARTED 2025-07-04 18:21:17.289518 | orchestrator | 2025-07-04 18:21:17 | INFO  | Task 85760d6e-c462-4bb8-82fc-ad0fc30010f4 is in state STARTED 2025-07-04 18:21:17.292786 | orchestrator | 2025-07-04 18:21:17 | INFO  | Task 518bdaeb-845c-4c85-8e3b-c17f03e1251f is in state STARTED 2025-07-04 18:21:17.295512 | orchestrator | 2025-07-04 18:21:17 | INFO  | Task 4c09b436-6a68-4b5e-8cf5-0dfe2092dedc is in state STARTED 2025-07-04 18:21:17.297035 | orchestrator | 2025-07-04 18:21:17 | INFO  | Task 1c1af083-d8d2-4b30-8bdc-e75fc75e0db0 is in state STARTED 2025-07-04 18:21:17.297266 | orchestrator | 2025-07-04 18:21:17 | INFO  | Wait 1 second(s) until the next check 2025-07-04 
18:21:23.383287 | orchestrator | 2025-07-04 18:21:23 | INFO  | Task e211f358-fa24-405e-8947-792acb0e8038 is in state STARTED
2025-07-04 18:21:23.385184 | orchestrator | 2025-07-04 18:21:23 | INFO  | Task b8d0347d-dad2-42ee-bd67-6689c5f70861 is in state STARTED
2025-07-04 18:21:23.387319 | orchestrator | 2025-07-04 18:21:23 | INFO  | Task 85760d6e-c462-4bb8-82fc-ad0fc30010f4 is in state STARTED
2025-07-04 18:21:23.389640 | orchestrator | 2025-07-04 18:21:23 | INFO  | Task 518bdaeb-845c-4c85-8e3b-c17f03e1251f is in state STARTED
2025-07-04 18:21:23.392202 | orchestrator | 2025-07-04 18:21:23 | INFO  | Task 4c09b436-6a68-4b5e-8cf5-0dfe2092dedc is in state SUCCESS
2025-07-04 18:21:23.393640 | orchestrator | 2025-07-04 18:21:23 | INFO  | Task 1c1af083-d8d2-4b30-8bdc-e75fc75e0db0 is in state STARTED
2025-07-04 18:21:23.393671 | orchestrator | 2025-07-04 18:21:23 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:21:26.446891 | orchestrator | 2025-07-04 18:21:26 | INFO  | Task e211f358-fa24-405e-8947-792acb0e8038 is in state STARTED
2025-07-04 18:21:26.452925 | orchestrator | 2025-07-04 18:21:26 | INFO  | Task b8d0347d-dad2-42ee-bd67-6689c5f70861 is in state STARTED
2025-07-04 18:21:26.455418 | orchestrator | 2025-07-04 18:21:26 | INFO  | Task 85760d6e-c462-4bb8-82fc-ad0fc30010f4 is in state STARTED
2025-07-04 18:21:26.457104 | orchestrator | 2025-07-04 18:21:26 | INFO  | Task 518bdaeb-845c-4c85-8e3b-c17f03e1251f is in state STARTED
2025-07-04 18:21:26.458494 | orchestrator | 2025-07-04 18:21:26 | INFO  | Task 1c1af083-d8d2-4b30-8bdc-e75fc75e0db0 is in state STARTED
2025-07-04 18:21:26.458671 | orchestrator | 2025-07-04 18:21:26 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:22:18.164152 | orchestrator | 2025-07-04 18:22:18 | INFO  | Task e211f358-fa24-405e-8947-792acb0e8038 is in state STARTED
2025-07-04 18:22:18.164661 | orchestrator | 2025-07-04 18:22:18 | INFO  | Task b8d0347d-dad2-42ee-bd67-6689c5f70861 is in state STARTED
2025-07-04 18:22:18.165899 | orchestrator | 2025-07-04 18:22:18 | INFO  | Task 85760d6e-c462-4bb8-82fc-ad0fc30010f4 is in state STARTED
2025-07-04 18:22:18.166753 | orchestrator | 2025-07-04 18:22:18 | INFO  | Task 518bdaeb-845c-4c85-8e3b-c17f03e1251f is in state STARTED
2025-07-04 18:22:18.167825 | orchestrator | 2025-07-04 18:22:18 | INFO  | Task 1c1af083-d8d2-4b30-8bdc-e75fc75e0db0 is in state STARTED
2025-07-04 18:22:18.168068 | orchestrator | 2025-07-04 18:22:18 | INFO  | Wait 1
second(s) until the next check
2025-07-04 18:22:21.197681 | orchestrator | 2025-07-04 18:22:21 | INFO  | Task e211f358-fa24-405e-8947-792acb0e8038 is in state STARTED
2025-07-04 18:22:21.197765 | orchestrator | 2025-07-04 18:22:21 | INFO  | Task b8d0347d-dad2-42ee-bd67-6689c5f70861 is in state STARTED
2025-07-04 18:22:21.198466 | orchestrator | 2025-07-04 18:22:21 | INFO  | Task 85760d6e-c462-4bb8-82fc-ad0fc30010f4 is in state STARTED
2025-07-04 18:22:21.199186 | orchestrator | 2025-07-04 18:22:21 | INFO  | Task 518bdaeb-845c-4c85-8e3b-c17f03e1251f is in state SUCCESS
2025-07-04 18:22:21.199747 | orchestrator |
2025-07-04 18:22:21.199772 | orchestrator | None
2025-07-04 18:22:21.199783 | orchestrator |
2025-07-04 18:22:21.199795 | orchestrator | PLAY [Apply role cephclient] ***************************************************
2025-07-04 18:22:21.199806 | orchestrator |
2025-07-04 18:22:21.199817 | orchestrator | TASK [osism.services.cephclient : Include container tasks] *********************
2025-07-04 18:22:21.199828 | orchestrator | Friday 04 July 2025 18:20:25 +0000 (0:00:00.256) 0:00:00.256 ***********
2025-07-04 18:22:21.199840 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager
2025-07-04 18:22:21.199852 | orchestrator |
2025-07-04 18:22:21.199863 | orchestrator | TASK [osism.services.cephclient : Create required directories] *****************
2025-07-04 18:22:21.199874 | orchestrator | Friday 04 July 2025 18:20:25 +0000 (0:00:00.227) 0:00:00.483 ***********
2025-07-04 18:22:21.199885 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration)
2025-07-04 18:22:21.199896 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data)
2025-07-04 18:22:21.199907 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient)
2025-07-04 18:22:21.199919 | orchestrator |
2025-07-04 18:22:21.199998 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ********************
2025-07-04 18:22:21.200019 | orchestrator | Friday 04 July 2025 18:20:26 +0000 (0:00:01.251) 0:00:01.735 ***********
2025-07-04 18:22:21.200037 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'})
2025-07-04 18:22:21.200058 | orchestrator |
2025-07-04 18:22:21.200078 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] ***************************
2025-07-04 18:22:21.200121 | orchestrator | Friday 04 July 2025 18:20:27 +0000 (0:00:01.171) 0:00:02.907 ***********
2025-07-04 18:22:21.200134 | orchestrator | changed: [testbed-manager]
2025-07-04 18:22:21.200144 | orchestrator |
2025-07-04 18:22:21.200155 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] ****************
2025-07-04 18:22:21.200167 | orchestrator | Friday 04 July 2025 18:20:28 +0000 (0:00:01.007) 0:00:03.914 ***********
2025-07-04 18:22:21.200178 | orchestrator | changed: [testbed-manager]
2025-07-04 18:22:21.200189 | orchestrator |
2025-07-04 18:22:21.200199 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] *******************
2025-07-04 18:22:21.200210 | orchestrator | Friday 04 July 2025 18:20:29 +0000 (0:00:00.912) 0:00:04.827 ***********
2025-07-04 18:22:21.200220 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left).
2025-07-04 18:22:21.200231 | orchestrator | ok: [testbed-manager]
2025-07-04 18:22:21.200242 | orchestrator |
2025-07-04 18:22:21.200252 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2025-07-04 18:22:21.200437 | orchestrator | Friday 04 July 2025 18:21:11 +0000 (0:00:41.958) 0:00:46.786 ***********
2025-07-04 18:22:21.200454 | orchestrator | changed: [testbed-manager] => (item=ceph)
2025-07-04 18:22:21.200466 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool)
2025-07-04 18:22:21.200477 | orchestrator | changed: [testbed-manager] => (item=rados)
2025-07-04 18:22:21.200489 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin)
2025-07-04 18:22:21.200500 | orchestrator | changed: [testbed-manager] => (item=rbd)
2025-07-04 18:22:21.200511 | orchestrator |
2025-07-04 18:22:21.200523 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2025-07-04 18:22:21.200534 | orchestrator | Friday 04 July 2025 18:21:15 +0000 (0:00:04.107) 0:00:50.893 ***********
2025-07-04 18:22:21.200545 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2025-07-04 18:22:21.200557 | orchestrator |
2025-07-04 18:22:21.200568 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2025-07-04 18:22:21.200580 | orchestrator | Friday 04 July 2025 18:21:16 +0000 (0:00:00.481) 0:00:51.375 ***********
2025-07-04 18:22:21.200591 | orchestrator | skipping: [testbed-manager]
2025-07-04 18:22:21.200602 | orchestrator |
2025-07-04 18:22:21.200613 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2025-07-04 18:22:21.200625 | orchestrator | Friday 04 July 2025 18:21:16 +0000 (0:00:00.132) 0:00:51.507 ***********
2025-07-04 18:22:21.200636 | orchestrator | skipping: [testbed-manager]
2025-07-04 18:22:21.200648 | orchestrator |
2025-07-04 18:22:21.200659 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] *******
2025-07-04 18:22:21.200670 | orchestrator | Friday 04 July 2025 18:21:16 +0000 (0:00:00.302) 0:00:51.810 ***********
2025-07-04 18:22:21.200681 | orchestrator | changed: [testbed-manager]
2025-07-04 18:22:21.200693 | orchestrator |
2025-07-04 18:22:21.200704 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] ***
2025-07-04 18:22:21.200728 | orchestrator | Friday 04 July 2025 18:21:18 +0000 (0:00:02.029) 0:00:53.840 ***********
2025-07-04 18:22:21.200740 | orchestrator | changed: [testbed-manager]
2025-07-04 18:22:21.200751 | orchestrator |
2025-07-04 18:22:21.200762 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ******
2025-07-04 18:22:21.200774 | orchestrator | Friday 04 July 2025 18:21:19 +0000 (0:00:00.700) 0:00:54.540 ***********
2025-07-04 18:22:21.200785 | orchestrator | changed: [testbed-manager]
2025-07-04 18:22:21.200796 | orchestrator |
2025-07-04 18:22:21.200808 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] *****
2025-07-04 18:22:21.200819 | orchestrator | Friday 04 July 2025 18:21:20 +0000 (0:00:00.570) 0:00:55.111 ***********
2025-07-04 18:22:21.200830 | orchestrator | ok: [testbed-manager] => (item=ceph)
2025-07-04 18:22:21.200842 | orchestrator | ok: [testbed-manager] => (item=rados)
2025-07-04 18:22:21.200853 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin)
2025-07-04 18:22:21.200865 | orchestrator | ok: [testbed-manager] => (item=rbd)
2025-07-04 18:22:21.200886 | orchestrator |
2025-07-04 18:22:21.200898 | orchestrator | PLAY RECAP *********************************************************************
2025-07-04 18:22:21.200909 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-04 18:22:21.200921 | orchestrator |
2025-07-04 18:22:21.200961 | orchestrator |
2025-07-04 18:22:21.200988 | orchestrator | TASKS RECAP ********************************************************************
2025-07-04 18:22:21.201000 | orchestrator | Friday 04 July 2025 18:21:21 +0000 (0:00:01.365) 0:00:56.477 ***********
2025-07-04 18:22:21.201011 | orchestrator | ===============================================================================
2025-07-04 18:22:21.201021 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 41.96s
2025-07-04 18:22:21.201034 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.11s
2025-07-04 18:22:21.201053 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 2.03s
2025-07-04 18:22:21.201071 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.37s
2025-07-04 18:22:21.201092 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.25s
2025-07-04 18:22:21.201112 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.17s
2025-07-04 18:22:21.201131 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 1.01s
2025-07-04 18:22:21.201149 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.91s
2025-07-04 18:22:21.201161 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.70s
2025-07-04 18:22:21.201174 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.57s
2025-07-04 18:22:21.201187 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.48s
2025-07-04 18:22:21.201199 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.30s
2025-07-04 18:22:21.201209 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.23s
2025-07-04 18:22:21.201220 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.13s
2025-07-04 18:22:21.201231 | orchestrator |
2025-07-04 18:22:21.201241 | orchestrator |
2025-07-04 18:22:21.201252 | orchestrator | PLAY [Download ironic ipa images] **********************************************
2025-07-04 18:22:21.201262 | orchestrator |
2025-07-04 18:22:21.201273 | orchestrator | TASK [Ensure the destination directory exists] *********************************
2025-07-04 18:22:21.201284 | orchestrator | Friday 04 July 2025 18:21:03 +0000 (0:00:00.187) 0:00:00.187 ***********
2025-07-04 18:22:21.201294 | orchestrator | changed: [localhost]
2025-07-04 18:22:21.201305 | orchestrator |
2025-07-04 18:22:21.201316 | orchestrator | TASK [Download ironic-agent initramfs] *****************************************
2025-07-04 18:22:21.201326 | orchestrator | Friday 04 July 2025 18:21:04 +0000 (0:00:01.085) 0:00:01.272 ***********
2025-07-04 18:22:21.201337 | orchestrator | changed: [localhost]
2025-07-04 18:22:21.201347 | orchestrator |
2025-07-04 18:22:21.201358 | orchestrator | TASK [Download ironic-agent kernel] ********************************************
2025-07-04 18:22:21.201369 | orchestrator | Friday 04 July 2025 18:22:15 +0000 (0:01:10.473) 0:01:11.746 ***********
2025-07-04 18:22:21.201379 | orchestrator | changed: [localhost]
2025-07-04 18:22:21.201390 | orchestrator |
2025-07-04 18:22:21.201401 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-04 18:22:21.201411 | orchestrator |
2025-07-04 18:22:21.201422 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-04 18:22:21.201433 | orchestrator | Friday 04 July 2025 18:22:19 +0000 (0:00:04.587) 0:01:16.333 ***********
2025-07-04 18:22:21.201443 | orchestrator | ok: [testbed-node-0]
2025-07-04 18:22:21.201454 | orchestrator | ok: [testbed-node-1]
2025-07-04 18:22:21.201465 | orchestrator | ok: [testbed-node-2]
2025-07-04 18:22:21.201475 | orchestrator |
2025-07-04 18:22:21.201486 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-04 18:22:21.201496 | orchestrator | Friday 04 July 2025 18:22:19 +0000 (0:00:00.290) 0:01:16.624 ***********
2025-07-04 18:22:21.201515 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True
2025-07-04 18:22:21.201526 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False)
2025-07-04 18:22:21.201537 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False)
2025-07-04 18:22:21.201547 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False)
2025-07-04 18:22:21.201558 | orchestrator |
2025-07-04 18:22:21.201568 | orchestrator | PLAY [Apply role ironic] *******************************************************
2025-07-04 18:22:21.201579 | orchestrator | skipping: no hosts matched
2025-07-04 18:22:21.201589 | orchestrator |
2025-07-04 18:22:21.201600 | orchestrator | PLAY RECAP *********************************************************************
2025-07-04 18:22:21.201611 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-04 18:22:21.201628 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-04 18:22:21.201640 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-04 18:22:21.201651 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-04 18:22:21.201662 | orchestrator |
2025-07-04 18:22:21.201672 | orchestrator |
2025-07-04 18:22:21.201683 | orchestrator | TASKS RECAP ********************************************************************
2025-07-04 18:22:21.201693 | orchestrator | Friday 04 July 2025 18:22:20 +0000 (0:00:00.708) 0:01:17.333 ***********
2025-07-04 18:22:21.201704 | orchestrator | ===============================================================================
2025-07-04 18:22:21.201715 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 70.48s
2025-07-04 18:22:21.201725 | orchestrator | Download ironic-agent kernel -------------------------------------------- 4.59s
2025-07-04 18:22:21.201736 | orchestrator | Ensure the destination directory exists --------------------------------- 1.09s
2025-07-04 18:22:21.201746 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.71s
2025-07-04 18:22:21.201764 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.29s
2025-07-04 18:22:21.201776 | orchestrator | 2025-07-04 18:22:21 | INFO  | Task 1c1af083-d8d2-4b30-8bdc-e75fc75e0db0 is in state STARTED
2025-07-04 18:22:21.201786 | orchestrator | 2025-07-04 18:22:21 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:22:24.233022 | orchestrator | 2025-07-04 18:22:24 | INFO  | Task e211f358-fa24-405e-8947-792acb0e8038 is in state STARTED
2025-07-04 18:22:24.233678 | orchestrator | 2025-07-04 18:22:24 | INFO  | Task b8d0347d-dad2-42ee-bd67-6689c5f70861 is in state STARTED
2025-07-04 18:22:24.236129 | orchestrator | 2025-07-04 18:22:24 | INFO  | Task 85760d6e-c462-4bb8-82fc-ad0fc30010f4 is in state STARTED
2025-07-04 18:22:24.237612 | orchestrator | 2025-07-04 18:22:24 | INFO  | Task 8158723b-7087-49df-9b05-6975880eb14d is in state STARTED
2025-07-04 18:22:24.241777 | orchestrator | 2025-07-04 18:22:24 | INFO  | Task 1c1af083-d8d2-4b30-8bdc-e75fc75e0db0 is in state STARTED
2025-07-04 18:22:24.241829 | orchestrator | 2025-07-04 18:22:24 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:22:27.289252 | orchestrator | 2025-07-04 18:22:27 | INFO  | Task e211f358-fa24-405e-8947-792acb0e8038 is in state STARTED
2025-07-04 18:22:27.290414 | orchestrator | 2025-07-04 18:22:27 | INFO  | Task
b8d0347d-dad2-42ee-bd67-6689c5f70861 is in state STARTED
2025-07-04 18:22:27.290742 | orchestrator | 2025-07-04 18:22:27 | INFO  | Task 85760d6e-c462-4bb8-82fc-ad0fc30010f4 is in state STARTED
2025-07-04 18:22:27.291471 | orchestrator | 2025-07-04 18:22:27 | INFO  | Task 8158723b-7087-49df-9b05-6975880eb14d is in state STARTED
2025-07-04 18:22:27.293671 | orchestrator | 2025-07-04 18:22:27 | INFO  | Task 1c1af083-d8d2-4b30-8bdc-e75fc75e0db0 is in state STARTED
2025-07-04 18:22:27.293721 | orchestrator | 2025-07-04 18:22:27 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:22:48.569111 | orchestrator | 2025-07-04 18:22:48 | INFO  | Task e211f358-fa24-405e-8947-792acb0e8038 is in state SUCCESS
2025-07-04 18:22:48.569639 | orchestrator | 2025-07-04 18:22:48 | INFO  | Task b8d0347d-dad2-42ee-bd67-6689c5f70861 is in state STARTED
2025-07-04 18:22:48.570395 | orchestrator | 2025-07-04 18:22:48 | INFO  | Task 85760d6e-c462-4bb8-82fc-ad0fc30010f4 is in state STARTED
2025-07-04 18:22:48.571095 | orchestrator | 2025-07-04 18:22:48 | INFO  | Task 8158723b-7087-49df-9b05-6975880eb14d is in state STARTED
2025-07-04 18:22:48.571852 | orchestrator | 2025-07-04 18:22:48 | INFO  | Task
1c1af083-d8d2-4b30-8bdc-e75fc75e0db0 is in state STARTED 2025-07-04 18:22:48.571878 | orchestrator | 2025-07-04 18:22:48 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:22:51.610516 | orchestrator | 2025-07-04 18:22:51 | INFO  | Task b8d0347d-dad2-42ee-bd67-6689c5f70861 is in state STARTED 2025-07-04 18:22:51.616487 | orchestrator | 2025-07-04 18:22:51 | INFO  | Task 85760d6e-c462-4bb8-82fc-ad0fc30010f4 is in state STARTED 2025-07-04 18:22:51.616985 | orchestrator | 2025-07-04 18:22:51 | INFO  | Task 8158723b-7087-49df-9b05-6975880eb14d is in state STARTED 2025-07-04 18:22:51.617788 | orchestrator | 2025-07-04 18:22:51 | INFO  | Task 1c1af083-d8d2-4b30-8bdc-e75fc75e0db0 is in state STARTED 2025-07-04 18:22:51.617990 | orchestrator | 2025-07-04 18:22:51 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:22:54.648166 | orchestrator | 2025-07-04 18:22:54 | INFO  | Task b8d0347d-dad2-42ee-bd67-6689c5f70861 is in state STARTED 2025-07-04 18:22:54.648624 | orchestrator | 2025-07-04 18:22:54 | INFO  | Task 85760d6e-c462-4bb8-82fc-ad0fc30010f4 is in state STARTED 2025-07-04 18:22:54.649883 | orchestrator | 2025-07-04 18:22:54 | INFO  | Task 8158723b-7087-49df-9b05-6975880eb14d is in state STARTED 2025-07-04 18:22:54.650544 | orchestrator | 2025-07-04 18:22:54 | INFO  | Task 1c1af083-d8d2-4b30-8bdc-e75fc75e0db0 is in state STARTED 2025-07-04 18:22:54.650574 | orchestrator | 2025-07-04 18:22:54 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:22:57.675993 | orchestrator | 2025-07-04 18:22:57 | INFO  | Task b8d0347d-dad2-42ee-bd67-6689c5f70861 is in state STARTED 2025-07-04 18:22:57.676290 | orchestrator | 2025-07-04 18:22:57 | INFO  | Task 85760d6e-c462-4bb8-82fc-ad0fc30010f4 is in state STARTED 2025-07-04 18:22:57.678965 | orchestrator | 2025-07-04 18:22:57 | INFO  | Task 8158723b-7087-49df-9b05-6975880eb14d is in state STARTED 2025-07-04 18:22:57.679820 | orchestrator | 2025-07-04 18:22:57 | INFO  | Task 
1c1af083-d8d2-4b30-8bdc-e75fc75e0db0 is in state STARTED 2025-07-04 18:22:57.679843 | orchestrator | 2025-07-04 18:22:57 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:23:00.718514 | orchestrator | 2025-07-04 18:23:00 | INFO  | Task b8d0347d-dad2-42ee-bd67-6689c5f70861 is in state STARTED 2025-07-04 18:23:00.720217 | orchestrator | 2025-07-04 18:23:00 | INFO  | Task 85760d6e-c462-4bb8-82fc-ad0fc30010f4 is in state STARTED 2025-07-04 18:23:00.723296 | orchestrator | 2025-07-04 18:23:00 | INFO  | Task 8158723b-7087-49df-9b05-6975880eb14d is in state STARTED 2025-07-04 18:23:00.724145 | orchestrator | 2025-07-04 18:23:00 | INFO  | Task 1c1af083-d8d2-4b30-8bdc-e75fc75e0db0 is in state STARTED 2025-07-04 18:23:00.724180 | orchestrator | 2025-07-04 18:23:00 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:23:03.767974 | orchestrator | 2025-07-04 18:23:03 | INFO  | Task b8d0347d-dad2-42ee-bd67-6689c5f70861 is in state STARTED 2025-07-04 18:23:03.768765 | orchestrator | 2025-07-04 18:23:03 | INFO  | Task 85760d6e-c462-4bb8-82fc-ad0fc30010f4 is in state STARTED 2025-07-04 18:23:03.769646 | orchestrator | 2025-07-04 18:23:03 | INFO  | Task 8158723b-7087-49df-9b05-6975880eb14d is in state STARTED 2025-07-04 18:23:03.770619 | orchestrator | 2025-07-04 18:23:03 | INFO  | Task 1c1af083-d8d2-4b30-8bdc-e75fc75e0db0 is in state STARTED 2025-07-04 18:23:03.770681 | orchestrator | 2025-07-04 18:23:03 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:23:06.800054 | orchestrator | 2025-07-04 18:23:06 | INFO  | Task b8d0347d-dad2-42ee-bd67-6689c5f70861 is in state STARTED 2025-07-04 18:23:06.800359 | orchestrator | 2025-07-04 18:23:06 | INFO  | Task 85760d6e-c462-4bb8-82fc-ad0fc30010f4 is in state STARTED 2025-07-04 18:23:06.801007 | orchestrator | 2025-07-04 18:23:06 | INFO  | Task 8158723b-7087-49df-9b05-6975880eb14d is in state STARTED 2025-07-04 18:23:06.801712 | orchestrator | 2025-07-04 18:23:06 | INFO  | Task 
1c1af083-d8d2-4b30-8bdc-e75fc75e0db0 is in state STARTED 2025-07-04 18:23:06.801738 | orchestrator | 2025-07-04 18:23:06 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:23:09.841163 | orchestrator | 2025-07-04 18:23:09 | INFO  | Task ff5b3e53-7431-45f2-a9cd-7d462a596cf0 is in state STARTED 2025-07-04 18:23:09.841687 | orchestrator | 2025-07-04 18:23:09 | INFO  | Task b8d0347d-dad2-42ee-bd67-6689c5f70861 is in state STARTED 2025-07-04 18:23:09.843339 | orchestrator | 2025-07-04 18:23:09 | INFO  | Task 85760d6e-c462-4bb8-82fc-ad0fc30010f4 is in state SUCCESS 2025-07-04 18:23:09.844967 | orchestrator | 2025-07-04 18:23:09.845000 | orchestrator | 2025-07-04 18:23:09.845012 | orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2025-07-04 18:23:09.845024 | orchestrator | 2025-07-04 18:23:09.845035 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2025-07-04 18:23:09.845046 | orchestrator | Friday 04 July 2025 18:21:25 +0000 (0:00:00.251) 0:00:00.251 *********** 2025-07-04 18:23:09.845057 | orchestrator | changed: [testbed-manager] 2025-07-04 18:23:09.845069 | orchestrator | 2025-07-04 18:23:09.845080 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2025-07-04 18:23:09.845091 | orchestrator | Friday 04 July 2025 18:21:27 +0000 (0:00:02.168) 0:00:02.419 *********** 2025-07-04 18:23:09.845102 | orchestrator | changed: [testbed-manager] 2025-07-04 18:23:09.845113 | orchestrator | 2025-07-04 18:23:09.845124 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2025-07-04 18:23:09.845135 | orchestrator | Friday 04 July 2025 18:21:28 +0000 (0:00:01.002) 0:00:03.422 *********** 2025-07-04 18:23:09.845170 | orchestrator | changed: [testbed-manager] 2025-07-04 18:23:09.845182 | orchestrator | 2025-07-04 18:23:09.845193 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] 
******************************** 2025-07-04 18:23:09.845204 | orchestrator | Friday 04 July 2025 18:21:29 +0000 (0:00:00.982) 0:00:04.405 *********** 2025-07-04 18:23:09.845214 | orchestrator | changed: [testbed-manager] 2025-07-04 18:23:09.845225 | orchestrator | 2025-07-04 18:23:09.845248 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2025-07-04 18:23:09.845259 | orchestrator | Friday 04 July 2025 18:21:31 +0000 (0:00:01.337) 0:00:05.742 *********** 2025-07-04 18:23:09.845270 | orchestrator | changed: [testbed-manager] 2025-07-04 18:23:09.845281 | orchestrator | 2025-07-04 18:23:09.845291 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2025-07-04 18:23:09.845302 | orchestrator | Friday 04 July 2025 18:21:32 +0000 (0:00:00.999) 0:00:06.742 *********** 2025-07-04 18:23:09.845313 | orchestrator | changed: [testbed-manager] 2025-07-04 18:23:09.845324 | orchestrator | 2025-07-04 18:23:09.845334 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2025-07-04 18:23:09.845345 | orchestrator | Friday 04 July 2025 18:21:33 +0000 (0:00:00.951) 0:00:07.693 *********** 2025-07-04 18:23:09.845355 | orchestrator | changed: [testbed-manager] 2025-07-04 18:23:09.845366 | orchestrator | 2025-07-04 18:23:09.845377 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2025-07-04 18:23:09.845387 | orchestrator | Friday 04 July 2025 18:21:34 +0000 (0:00:01.193) 0:00:08.886 *********** 2025-07-04 18:23:09.845398 | orchestrator | changed: [testbed-manager] 2025-07-04 18:23:09.845408 | orchestrator | 2025-07-04 18:23:09.845419 | orchestrator | TASK [Create admin user] ******************************************************* 2025-07-04 18:23:09.845536 | orchestrator | Friday 04 July 2025 18:21:35 +0000 (0:00:01.081) 0:00:09.968 *********** 2025-07-04 18:23:09.845549 | orchestrator | changed: 
[testbed-manager] 2025-07-04 18:23:09.845561 | orchestrator | 2025-07-04 18:23:09.845573 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2025-07-04 18:23:09.845587 | orchestrator | Friday 04 July 2025 18:22:23 +0000 (0:00:48.363) 0:00:58.332 *********** 2025-07-04 18:23:09.845626 | orchestrator | skipping: [testbed-manager] 2025-07-04 18:23:09.845639 | orchestrator | 2025-07-04 18:23:09.845651 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-07-04 18:23:09.845663 | orchestrator | 2025-07-04 18:23:09.845676 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-07-04 18:23:09.845688 | orchestrator | Friday 04 July 2025 18:22:23 +0000 (0:00:00.177) 0:00:58.509 *********** 2025-07-04 18:23:09.845701 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:23:09.845713 | orchestrator | 2025-07-04 18:23:09.845725 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-07-04 18:23:09.845737 | orchestrator | 2025-07-04 18:23:09.845750 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-07-04 18:23:09.845761 | orchestrator | Friday 04 July 2025 18:22:35 +0000 (0:00:11.563) 0:01:10.072 *********** 2025-07-04 18:23:09.845773 | orchestrator | changed: [testbed-node-1] 2025-07-04 18:23:09.845786 | orchestrator | 2025-07-04 18:23:09.845798 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-07-04 18:23:09.845811 | orchestrator | 2025-07-04 18:23:09.845824 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-07-04 18:23:09.845837 | orchestrator | Friday 04 July 2025 18:22:36 +0000 (0:00:01.209) 0:01:11.282 *********** 2025-07-04 18:23:09.845848 | orchestrator | changed: [testbed-node-2] 2025-07-04 18:23:09.845860 | orchestrator | 
2025-07-04 18:23:09.845872 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-04 18:23:09.845886 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-07-04 18:23:09.845899 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-04 18:23:09.845937 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-04 18:23:09.845949 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-04 18:23:09.845961 | orchestrator | 2025-07-04 18:23:09.845972 | orchestrator | 2025-07-04 18:23:09.845984 | orchestrator | 2025-07-04 18:23:09.845996 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-04 18:23:09.846007 | orchestrator | Friday 04 July 2025 18:22:47 +0000 (0:00:11.233) 0:01:22.516 *********** 2025-07-04 18:23:09.846066 | orchestrator | =============================================================================== 2025-07-04 18:23:09.846080 | orchestrator | Create admin user ------------------------------------------------------ 48.36s 2025-07-04 18:23:09.846092 | orchestrator | Restart ceph manager service ------------------------------------------- 24.01s 2025-07-04 18:23:09.846118 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 2.17s 2025-07-04 18:23:09.846129 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.34s 2025-07-04 18:23:09.846141 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 1.19s 2025-07-04 18:23:09.846152 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.08s 2025-07-04 18:23:09.846164 | orchestrator | Set mgr/dashboard/ssl to false 
------------------------------------------ 1.00s 2025-07-04 18:23:09.846175 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.00s 2025-07-04 18:23:09.846187 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 0.98s 2025-07-04 18:23:09.846198 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 0.95s 2025-07-04 18:23:09.846210 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.18s 2025-07-04 18:23:09.846221 | orchestrator | 2025-07-04 18:23:09.846233 | orchestrator | 2025-07-04 18:23:09.846244 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-04 18:23:09.846256 | orchestrator | 2025-07-04 18:23:09.846267 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-04 18:23:09.846279 | orchestrator | Friday 04 July 2025 18:21:03 +0000 (0:00:00.333) 0:00:00.333 *********** 2025-07-04 18:23:09.846290 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:23:09.846309 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:23:09.846321 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:23:09.846333 | orchestrator | 2025-07-04 18:23:09.846344 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-04 18:23:09.846356 | orchestrator | Friday 04 July 2025 18:21:03 +0000 (0:00:00.341) 0:00:00.675 *********** 2025-07-04 18:23:09.846367 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2025-07-04 18:23:09.846380 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2025-07-04 18:23:09.846392 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2025-07-04 18:23:09.846403 | orchestrator | 2025-07-04 18:23:09.846415 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2025-07-04 
18:23:09.846426 | orchestrator | 2025-07-04 18:23:09.846438 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-07-04 18:23:09.846449 | orchestrator | Friday 04 July 2025 18:21:04 +0000 (0:00:00.562) 0:00:01.237 *********** 2025-07-04 18:23:09.846461 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-04 18:23:09.846473 | orchestrator | 2025-07-04 18:23:09.846485 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2025-07-04 18:23:09.846496 | orchestrator | Friday 04 July 2025 18:21:04 +0000 (0:00:00.583) 0:00:01.820 *********** 2025-07-04 18:23:09.846508 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2025-07-04 18:23:09.846519 | orchestrator | 2025-07-04 18:23:09.846537 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2025-07-04 18:23:09.846549 | orchestrator | Friday 04 July 2025 18:21:08 +0000 (0:00:03.770) 0:00:05.591 *********** 2025-07-04 18:23:09.846560 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2025-07-04 18:23:09.846572 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2025-07-04 18:23:09.846584 | orchestrator | 2025-07-04 18:23:09.846595 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2025-07-04 18:23:09.846607 | orchestrator | Friday 04 July 2025 18:21:15 +0000 (0:00:06.423) 0:00:12.014 *********** 2025-07-04 18:23:09.846619 | orchestrator | changed: [testbed-node-0] => (item=service) 2025-07-04 18:23:09.846630 | orchestrator | 2025-07-04 18:23:09.846641 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2025-07-04 18:23:09.846653 | orchestrator | Friday 04 July 2025 18:21:18 +0000 
(0:00:03.420) 0:00:15.435 *********** 2025-07-04 18:23:09.846665 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-07-04 18:23:09.846676 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2025-07-04 18:23:09.846688 | orchestrator | 2025-07-04 18:23:09.846699 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2025-07-04 18:23:09.846711 | orchestrator | Friday 04 July 2025 18:21:22 +0000 (0:00:04.039) 0:00:19.475 *********** 2025-07-04 18:23:09.846722 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-07-04 18:23:09.846734 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2025-07-04 18:23:09.846745 | orchestrator | changed: [testbed-node-0] => (item=creator) 2025-07-04 18:23:09.846757 | orchestrator | changed: [testbed-node-0] => (item=observer) 2025-07-04 18:23:09.846769 | orchestrator | changed: [testbed-node-0] => (item=audit) 2025-07-04 18:23:09.846781 | orchestrator | 2025-07-04 18:23:09.846792 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2025-07-04 18:23:09.846804 | orchestrator | Friday 04 July 2025 18:21:40 +0000 (0:00:17.985) 0:00:37.460 *********** 2025-07-04 18:23:09.846815 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2025-07-04 18:23:09.846827 | orchestrator | 2025-07-04 18:23:09.846838 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2025-07-04 18:23:09.846850 | orchestrator | Friday 04 July 2025 18:21:45 +0000 (0:00:04.664) 0:00:42.125 *********** 2025-07-04 18:23:09.846874 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-04 18:23:09.846893 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-04 18:23:09.846943 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-04 18:23:09.846956 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-04 18:23:09.846969 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': 
'30'}}}) 2025-07-04 18:23:09.847030 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-04 18:23:09.847046 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-04 18:23:09.847071 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-04 
18:23:09.847083 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-04 18:23:09.847121 | orchestrator | 2025-07-04 18:23:09.847134 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2025-07-04 18:23:09.847145 | orchestrator | Friday 04 July 2025 18:21:47 +0000 (0:00:02.407) 0:00:44.532 *********** 2025-07-04 18:23:09.847156 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2025-07-04 18:23:09.847166 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2025-07-04 18:23:09.847177 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2025-07-04 18:23:09.847188 | orchestrator | 2025-07-04 18:23:09.847199 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2025-07-04 18:23:09.847209 | orchestrator | Friday 04 July 2025 18:21:48 +0000 (0:00:01.268) 0:00:45.801 *********** 2025-07-04 18:23:09.847220 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:23:09.847231 | orchestrator | 2025-07-04 18:23:09.847242 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2025-07-04 18:23:09.847252 | orchestrator | Friday 04 July 2025 18:21:48 +0000 (0:00:00.113) 0:00:45.914 *********** 2025-07-04 18:23:09.847263 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:23:09.847274 | orchestrator | skipping: [testbed-node-1] 
2025-07-04 18:23:09.847285 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:23:09.847295 | orchestrator | 2025-07-04 18:23:09.847306 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-07-04 18:23:09.847317 | orchestrator | Friday 04 July 2025 18:21:49 +0000 (0:00:00.460) 0:00:46.375 *********** 2025-07-04 18:23:09.847328 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-04 18:23:09.847339 | orchestrator | 2025-07-04 18:23:09.847349 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2025-07-04 18:23:09.847360 | orchestrator | Friday 04 July 2025 18:21:49 +0000 (0:00:00.494) 0:00:46.869 *********** 2025-07-04 18:23:09.847379 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-04 18:23:09.847403 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-04 18:23:09.847416 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-04 18:23:09.847440 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-04 18:23:09.847461 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-04 18:23:09.847479 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-04 18:23:09.847497 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 
'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-04 18:23:09.847513 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-04 18:23:09.847525 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-04 18:23:09.847536 | orchestrator | 2025-07-04 18:23:09.847547 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2025-07-04 18:23:09.847558 | orchestrator | Friday 04 July 
2025 18:21:53 +0000 (0:00:03.810) 0:00:50.680 *********** 2025-07-04 18:23:09.847570 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-07-04 18:23:09.847582 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-04 18:23:09.847600 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-04 18:23:09.847618 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:23:09.847639 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-07-04 18:23:09.847651 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-04 18:23:09.847662 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-04 18:23:09.847673 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:23:09.847685 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-07-04 18:23:09.847696 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': 
{'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-04 18:23:09.847720 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-04 18:23:09.847731 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:23:09.847743 | orchestrator | 2025-07-04 18:23:09.847753 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2025-07-04 18:23:09.847764 | orchestrator | Friday 04 July 2025 18:21:55 +0000 (0:00:02.183) 0:00:52.863 *********** 2025-07-04 18:23:09.847780 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-07-04 18:23:09.847791 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-04 18:23:09.847803 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-04 18:23:09.847814 | orchestrator | skipping: [testbed-node-0] 2025-07-04 
18:23:09.847825 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-07-04 18:23:09.847855 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-07-04 18:23:09.847886 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-04 18:23:09.847970 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-04 18:23:09.847992 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  
2025-07-04 18:23:09.848012 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-04 18:23:09.848042 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:23:09.848054 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:23:09.848065 | orchestrator | 2025-07-04 18:23:09.848076 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2025-07-04 18:23:09.848087 | orchestrator | Friday 04 July 2025 18:21:56 +0000 (0:00:00.803) 0:00:53.667 *********** 2025-07-04 18:23:09.848108 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 
'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-04 18:23:09.848126 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-04 18:23:09.848138 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 
'no'}}}}) 2025-07-04 18:23:09.848149 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-04 18:23:09.848167 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-04 18:23:09.848185 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-04 18:23:09.848197 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-04 18:23:09.848212 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-04 18:23:09.848224 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 
'timeout': '30'}}}) 2025-07-04 18:23:09.848235 | orchestrator | 2025-07-04 18:23:09.848246 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2025-07-04 18:23:09.848257 | orchestrator | Friday 04 July 2025 18:22:00 +0000 (0:00:03.518) 0:00:57.185 *********** 2025-07-04 18:23:09.848268 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:23:09.848279 | orchestrator | changed: [testbed-node-1] 2025-07-04 18:23:09.848290 | orchestrator | changed: [testbed-node-2] 2025-07-04 18:23:09.848301 | orchestrator | 2025-07-04 18:23:09.848311 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2025-07-04 18:23:09.848322 | orchestrator | Friday 04 July 2025 18:22:02 +0000 (0:00:02.764) 0:00:59.950 *********** 2025-07-04 18:23:09.848338 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-07-04 18:23:09.848364 | orchestrator | 2025-07-04 18:23:09.848375 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2025-07-04 18:23:09.848386 | orchestrator | Friday 04 July 2025 18:22:04 +0000 (0:00:01.963) 0:01:01.913 *********** 2025-07-04 18:23:09.848396 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:23:09.848408 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:23:09.848418 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:23:09.848429 | orchestrator | 2025-07-04 18:23:09.848440 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2025-07-04 18:23:09.848450 | orchestrator | Friday 04 July 2025 18:22:05 +0000 (0:00:00.552) 0:01:02.466 *********** 2025-07-04 18:23:09.848462 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-04 18:23:09.848481 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-04 18:23:09.848498 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-04 18:23:09.848510 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-04 18:23:09.848527 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-04 18:23:09.848539 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-04 18:23:09.848556 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-04 18:23:09.848568 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-04 18:23:09.848584 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-04 18:23:09.848595 | orchestrator | 2025-07-04 18:23:09.848606 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2025-07-04 18:23:09.848617 | orchestrator | Friday 04 July 2025 18:22:15 +0000 (0:00:10.455) 0:01:12.921 *********** 2025-07-04 18:23:09.848629 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-07-04 
18:23:09.848647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-04 18:23:09.848659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-04 18:23:09.848670 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:23:09.848689 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-07-04 18:23:09.848705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-04 18:23:09.848717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-04 18:23:09.848738 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:23:09.848749 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-07-04 18:23:09.848761 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-04 18:23:09.848777 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-04 18:23:09.848789 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:23:09.848800 | orchestrator | 2025-07-04 18:23:09.848811 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2025-07-04 18:23:09.848822 | orchestrator | Friday 04 July 2025 18:22:17 +0000 (0:00:01.332) 0:01:14.254 *********** 2025-07-04 18:23:09.848838 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-04 18:23:09.848850 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-04 18:23:09.848872 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-04 18:23:09.848892 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-04 18:23:09.848944 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-04 18:23:09.848972 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-04 18:23:09.848993 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-04 18:23:09.849024 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-04 18:23:09.849045 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-04 18:23:09.849066 | orchestrator | 2025-07-04 18:23:09.849079 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-07-04 18:23:09.849089 | orchestrator | Friday 04 July 2025 18:22:20 +0000 (0:00:03.116) 0:01:17.370 *********** 2025-07-04 18:23:09.849100 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:23:09.849111 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:23:09.849122 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:23:09.849133 | orchestrator | 2025-07-04 
18:23:09.849143 | orchestrator | TASK [barbican : Creating barbican database] ***********************************
2025-07-04 18:23:09.849154 | orchestrator | Friday 04 July 2025 18:22:20 +0000 (0:00:00.461) 0:01:17.831 ***********
2025-07-04 18:23:09.849165 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:23:09.849175 | orchestrator |
2025-07-04 18:23:09.849186 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ******
2025-07-04 18:23:09.849197 | orchestrator | Friday 04 July 2025 18:22:22 +0000 (0:00:01.916) 0:01:19.748 ***********
2025-07-04 18:23:09.849207 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:23:09.849218 | orchestrator |
2025-07-04 18:23:09.849228 | orchestrator | TASK [barbican : Running barbican bootstrap container] *************************
2025-07-04 18:23:09.849239 | orchestrator | Friday 04 July 2025 18:22:25 +0000 (0:00:02.339) 0:01:22.087 ***********
2025-07-04 18:23:09.849250 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:23:09.849260 | orchestrator |
2025-07-04 18:23:09.849271 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2025-07-04 18:23:09.849282 | orchestrator | Friday 04 July 2025 18:22:36 +0000 (0:00:11.739) 0:01:33.826 ***********
2025-07-04 18:23:09.849292 | orchestrator |
2025-07-04 18:23:09.849303 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2025-07-04 18:23:09.849314 | orchestrator | Friday 04 July 2025 18:22:36 +0000 (0:00:00.131) 0:01:33.958 ***********
2025-07-04 18:23:09.849325 | orchestrator |
2025-07-04 18:23:09.849343 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2025-07-04 18:23:09.849355 | orchestrator | Friday 04 July 2025 18:22:37 +0000 (0:00:00.094) 0:01:34.053 ***********
2025-07-04 18:23:09.849366 | orchestrator |
2025-07-04 18:23:09.849377 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ********************
2025-07-04 18:23:09.849387 | orchestrator | Friday 04 July 2025 18:22:37 +0000 (0:00:00.067) 0:01:34.120 ***********
2025-07-04 18:23:09.849398 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:23:09.849416 | orchestrator | changed: [testbed-node-1]
2025-07-04 18:23:09.849426 | orchestrator | changed: [testbed-node-2]
2025-07-04 18:23:09.849442 | orchestrator |
2025-07-04 18:23:09.849462 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ******
2025-07-04 18:23:09.849481 | orchestrator | Friday 04 July 2025 18:22:43 +0000 (0:00:06.619) 0:01:40.739 ***********
2025-07-04 18:23:09.849496 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:23:09.849507 | orchestrator | changed: [testbed-node-1]
2025-07-04 18:23:09.849518 | orchestrator | changed: [testbed-node-2]
2025-07-04 18:23:09.849529 | orchestrator |
2025-07-04 18:23:09.849540 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] *****************
2025-07-04 18:23:09.849550 | orchestrator | Friday 04 July 2025 18:22:55 +0000 (0:00:11.488) 0:01:52.228 ***********
2025-07-04 18:23:09.849561 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:23:09.849572 | orchestrator | changed: [testbed-node-2]
2025-07-04 18:23:09.849587 | orchestrator | changed: [testbed-node-1]
2025-07-04 18:23:09.849598 | orchestrator |
2025-07-04 18:23:09.849609 | orchestrator | PLAY RECAP *********************************************************************
2025-07-04 18:23:09.849620 | orchestrator | testbed-node-0 : ok=24  changed=19  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-07-04 18:23:09.849632 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-07-04 18:23:09.849643 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-07-04 18:23:09.849654 | orchestrator |
2025-07-04 18:23:09.849665 | orchestrator |
2025-07-04 18:23:09.849676 | orchestrator | TASKS RECAP ********************************************************************
2025-07-04 18:23:09.849686 | orchestrator | Friday 04 July 2025 18:23:06 +0000 (0:00:11.558) 0:02:03.787 ***********
2025-07-04 18:23:09.849697 | orchestrator | ===============================================================================
2025-07-04 18:23:09.849707 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 17.99s
2025-07-04 18:23:09.849718 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 11.74s
2025-07-04 18:23:09.849729 | orchestrator | barbican : Restart barbican-worker container --------------------------- 11.56s
2025-07-04 18:23:09.849739 | orchestrator | barbican : Restart barbican-keystone-listener container ---------------- 11.49s
2025-07-04 18:23:09.849750 | orchestrator | barbican : Copying over barbican.conf ---------------------------------- 10.46s
2025-07-04 18:23:09.849760 | orchestrator | barbican : Restart barbican-api container ------------------------------- 6.62s
2025-07-04 18:23:09.849771 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.42s
2025-07-04 18:23:09.849781 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.66s
2025-07-04 18:23:09.849792 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 4.04s
2025-07-04 18:23:09.849802 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.81s
2025-07-04 18:23:09.849813 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.77s
2025-07-04 18:23:09.849824 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.52s
2025-07-04 18:23:09.849834 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.42s
2025-07-04 18:23:09.849845 | orchestrator | barbican : Check barbican containers ------------------------------------ 3.12s
2025-07-04 18:23:09.849856 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 2.76s
2025-07-04 18:23:09.849866 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.41s
2025-07-04 18:23:09.849877 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.34s
2025-07-04 18:23:09.849887 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS certificate --- 2.18s
2025-07-04 18:23:09.849921 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 1.96s
2025-07-04 18:23:09.849933 | orchestrator | barbican : Creating barbican database ----------------------------------- 1.92s
2025-07-04 18:23:09.849944 | orchestrator | 2025-07-04 18:23:09 | INFO  | Task 8158723b-7087-49df-9b05-6975880eb14d is in state STARTED
2025-07-04 18:23:09.850087 | orchestrator | 2025-07-04 18:23:09 | INFO  | Task 1c1af083-d8d2-4b30-8bdc-e75fc75e0db0 is in state STARTED
2025-07-04 18:23:09.850105 | orchestrator | 2025-07-04 18:23:09 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:23:12.875788 | orchestrator | 2025-07-04 18:23:12 | INFO  | Task ff5b3e53-7431-45f2-a9cd-7d462a596cf0 is in state STARTED
2025-07-04 18:23:12.876053 | orchestrator | 2025-07-04 18:23:12 | INFO  | Task b8d0347d-dad2-42ee-bd67-6689c5f70861 is in state STARTED
2025-07-04 18:23:12.876962 | orchestrator | 2025-07-04 18:23:12 | INFO  | Task 8158723b-7087-49df-9b05-6975880eb14d is in state STARTED
2025-07-04 18:23:12.878078 | orchestrator | 2025-07-04 18:23:12 | INFO  | Task 1c1af083-d8d2-4b30-8bdc-e75fc75e0db0 is in state STARTED
2025-07-04 18:23:12.878126 | orchestrator | 2025-07-04 18:23:12 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:23:15.923211 | orchestrator | 2025-07-04
18:23:15 | INFO  | Task ff5b3e53-7431-45f2-a9cd-7d462a596cf0 is in state STARTED
2025-07-04 18:23:15.923348 | orchestrator | 2025-07-04 18:23:15 | INFO  | Task b8d0347d-dad2-42ee-bd67-6689c5f70861 is in state STARTED
2025-07-04 18:23:15.923858 | orchestrator | 2025-07-04 18:23:15 | INFO  | Task 8158723b-7087-49df-9b05-6975880eb14d is in state STARTED
2025-07-04 18:23:15.924990 | orchestrator | 2025-07-04 18:23:15 | INFO  | Task 1c1af083-d8d2-4b30-8bdc-e75fc75e0db0 is in state STARTED
2025-07-04 18:23:15.925043 | orchestrator | 2025-07-04 18:23:15 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:23:18.967375 | orchestrator | 2025-07-04 18:23:18 | INFO  | Task ff5b3e53-7431-45f2-a9cd-7d462a596cf0 is in state STARTED
2025-07-04 18:23:18.967576 | orchestrator | 2025-07-04 18:23:18 | INFO  | Task b8d0347d-dad2-42ee-bd67-6689c5f70861 is in state STARTED
2025-07-04 18:23:18.969195 | orchestrator | 2025-07-04 18:23:18 | INFO  | Task 8158723b-7087-49df-9b05-6975880eb14d is in state STARTED
2025-07-04 18:23:18.969792 | orchestrator | 2025-07-04 18:23:18 | INFO  | Task 1c1af083-d8d2-4b30-8bdc-e75fc75e0db0 is in state STARTED
2025-07-04 18:23:18.969835 | orchestrator | 2025-07-04 18:23:18 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:23:22.012080 | orchestrator | 2025-07-04 18:23:22 | INFO  | Task ff5b3e53-7431-45f2-a9cd-7d462a596cf0 is in state STARTED
2025-07-04 18:23:22.012177 | orchestrator | 2025-07-04 18:23:22 | INFO  | Task b8d0347d-dad2-42ee-bd67-6689c5f70861 is in state STARTED
2025-07-04 18:23:22.012700 | orchestrator | 2025-07-04 18:23:22 | INFO  | Task 8158723b-7087-49df-9b05-6975880eb14d is in state STARTED
2025-07-04 18:23:22.013337 | orchestrator | 2025-07-04 18:23:22 | INFO  | Task 1c1af083-d8d2-4b30-8bdc-e75fc75e0db0 is in state STARTED
2025-07-04 18:23:22.013360 | orchestrator | 2025-07-04 18:23:22 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:23:25.037999 | orchestrator | 2025-07-04 18:23:25 | INFO  | Task ff5b3e53-7431-45f2-a9cd-7d462a596cf0 is in state STARTED
2025-07-04 18:23:25.039314 | orchestrator | 2025-07-04 18:23:25 | INFO  | Task b8d0347d-dad2-42ee-bd67-6689c5f70861 is in state STARTED
2025-07-04 18:23:25.040234 | orchestrator | 2025-07-04 18:23:25 | INFO  | Task 8158723b-7087-49df-9b05-6975880eb14d is in state STARTED
2025-07-04 18:23:25.040770 | orchestrator | 2025-07-04 18:23:25 | INFO  | Task 1c1af083-d8d2-4b30-8bdc-e75fc75e0db0 is in state STARTED
2025-07-04 18:23:25.041647 | orchestrator | 2025-07-04 18:23:25 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:23:28.077818 | orchestrator | 2025-07-04 18:23:28 | INFO  | Task ff5b3e53-7431-45f2-a9cd-7d462a596cf0 is in state STARTED
2025-07-04 18:23:28.078890 | orchestrator | 2025-07-04 18:23:28 | INFO  | Task b8d0347d-dad2-42ee-bd67-6689c5f70861 is in state STARTED
2025-07-04 18:23:28.079790 | orchestrator | 2025-07-04 18:23:28 | INFO  | Task 8158723b-7087-49df-9b05-6975880eb14d is in state STARTED
2025-07-04 18:23:28.082287 | orchestrator | 2025-07-04 18:23:28 | INFO  | Task 1c1af083-d8d2-4b30-8bdc-e75fc75e0db0 is in state STARTED
2025-07-04 18:23:28.082363 | orchestrator | 2025-07-04 18:23:28 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:23:31.119698 | orchestrator | 2025-07-04 18:23:31 | INFO  | Task ff5b3e53-7431-45f2-a9cd-7d462a596cf0 is in state STARTED
2025-07-04 18:23:31.120233 | orchestrator | 2025-07-04 18:23:31 | INFO  | Task b8d0347d-dad2-42ee-bd67-6689c5f70861 is in state STARTED
2025-07-04 18:23:31.121049 | orchestrator | 2025-07-04 18:23:31 | INFO  | Task 8158723b-7087-49df-9b05-6975880eb14d is in state STARTED
2025-07-04 18:23:31.124119 | orchestrator | 2025-07-04 18:23:31 | INFO  | Task 1c1af083-d8d2-4b30-8bdc-e75fc75e0db0 is in state STARTED
2025-07-04 18:23:31.124152 | orchestrator | 2025-07-04 18:23:31 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:23:34.154686 | orchestrator | 2025-07-04 18:23:34 | INFO  | Task ff5b3e53-7431-45f2-a9cd-7d462a596cf0 is in state STARTED
2025-07-04 18:23:34.155778 | orchestrator | 2025-07-04 18:23:34 | INFO  | Task b8d0347d-dad2-42ee-bd67-6689c5f70861 is in state STARTED
2025-07-04 18:23:34.156690 | orchestrator | 2025-07-04 18:23:34 | INFO  | Task 8158723b-7087-49df-9b05-6975880eb14d is in state STARTED
2025-07-04 18:23:34.157868 | orchestrator | 2025-07-04 18:23:34 | INFO  | Task 1c1af083-d8d2-4b30-8bdc-e75fc75e0db0 is in state STARTED
2025-07-04 18:23:34.157937 | orchestrator | 2025-07-04 18:23:34 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:23:37.189977 | orchestrator | 2025-07-04 18:23:37 | INFO  | Task ff5b3e53-7431-45f2-a9cd-7d462a596cf0 is in state STARTED
2025-07-04 18:23:37.190861 | orchestrator | 2025-07-04 18:23:37 | INFO  | Task b8d0347d-dad2-42ee-bd67-6689c5f70861 is in state STARTED
2025-07-04 18:23:37.191881 | orchestrator | 2025-07-04 18:23:37 | INFO  | Task 8158723b-7087-49df-9b05-6975880eb14d is in state STARTED
2025-07-04 18:23:37.193057 | orchestrator | 2025-07-04 18:23:37 | INFO  | Task 1c1af083-d8d2-4b30-8bdc-e75fc75e0db0 is in state STARTED
2025-07-04 18:23:37.193069 | orchestrator | 2025-07-04 18:23:37 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:23:40.241631 | orchestrator | 2025-07-04 18:23:40 | INFO  | Task ff5b3e53-7431-45f2-a9cd-7d462a596cf0 is in state STARTED
2025-07-04 18:23:40.248554 | orchestrator | 2025-07-04 18:23:40 | INFO  | Task b8d0347d-dad2-42ee-bd67-6689c5f70861 is in state STARTED
2025-07-04 18:23:40.250741 | orchestrator | 2025-07-04 18:23:40 | INFO  | Task 8158723b-7087-49df-9b05-6975880eb14d is in state STARTED
2025-07-04 18:23:40.256700 | orchestrator | 2025-07-04 18:23:40 | INFO  | Task 1c1af083-d8d2-4b30-8bdc-e75fc75e0db0 is in state STARTED
2025-07-04 18:23:40.256766 | orchestrator | 2025-07-04 18:23:40 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:23:43.302239 | orchestrator | 2025-07-04 18:23:43 | INFO  | Task
ff5b3e53-7431-45f2-a9cd-7d462a596cf0 is in state STARTED
2025-07-04 18:23:43.303001 | orchestrator | 2025-07-04 18:23:43 | INFO  | Task b8d0347d-dad2-42ee-bd67-6689c5f70861 is in state STARTED
2025-07-04 18:23:43.303722 | orchestrator | 2025-07-04 18:23:43 | INFO  | Task 886bbda2-8071-4195-bb4c-58f8a025d7c6 is in state STARTED
2025-07-04 18:23:43.305746 | orchestrator | 2025-07-04 18:23:43 | INFO  | Task 8158723b-7087-49df-9b05-6975880eb14d is in state SUCCESS
2025-07-04 18:23:43.307110 | orchestrator |
2025-07-04 18:23:43.307208 | orchestrator |
2025-07-04 18:23:43.307224 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-04 18:23:43.307237 | orchestrator |
2025-07-04 18:23:43.307248 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-04 18:23:43.307259 | orchestrator | Friday 04 July 2025 18:22:27 +0000 (0:00:00.412) 0:00:00.412 ***********
2025-07-04 18:23:43.307271 | orchestrator | ok: [testbed-node-0]
2025-07-04 18:23:43.307282 | orchestrator | ok: [testbed-node-1]
2025-07-04 18:23:43.307293 | orchestrator | ok: [testbed-node-2]
2025-07-04 18:23:43.307303 | orchestrator |
2025-07-04 18:23:43.307314 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-04 18:23:43.307325 | orchestrator | Friday 04 July 2025 18:22:27 +0000 (0:00:00.253) 0:00:00.665 ***********
2025-07-04 18:23:43.307336 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True)
2025-07-04 18:23:43.307347 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True)
2025-07-04 18:23:43.307357 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True)
2025-07-04 18:23:43.307368 | orchestrator |
2025-07-04 18:23:43.307379 | orchestrator | PLAY [Apply role placement] ****************************************************
2025-07-04 18:23:43.307390 | orchestrator |
2025-07-04 18:23:43.307400 | orchestrator | TASK [placement : include_tasks] ***********************************************
2025-07-04 18:23:43.307411 | orchestrator | Friday 04 July 2025 18:22:28 +0000 (0:00:00.566) 0:00:01.232 ***********
2025-07-04 18:23:43.307422 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-04 18:23:43.307433 | orchestrator |
2025-07-04 18:23:43.307444 | orchestrator | TASK [service-ks-register : placement | Creating services] *********************
2025-07-04 18:23:43.307455 | orchestrator | Friday 04 July 2025 18:22:29 +0000 (0:00:00.626) 0:00:01.859 ***********
2025-07-04 18:23:43.307465 | orchestrator | changed: [testbed-node-0] => (item=placement (placement))
2025-07-04 18:23:43.307476 | orchestrator |
2025-07-04 18:23:43.307487 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ********************
2025-07-04 18:23:43.307498 | orchestrator | Friday 04 July 2025 18:22:32 +0000 (0:00:03.158) 0:00:05.017 ***********
2025-07-04 18:23:43.307508 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal)
2025-07-04 18:23:43.307519 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public)
2025-07-04 18:23:43.307530 | orchestrator |
2025-07-04 18:23:43.307541 | orchestrator | TASK [service-ks-register : placement | Creating projects] *********************
2025-07-04 18:23:43.307552 | orchestrator | Friday 04 July 2025 18:22:38 +0000 (0:00:06.195) 0:00:11.212 ***********
2025-07-04 18:23:43.307562 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-07-04 18:23:43.307573 | orchestrator |
2025-07-04 18:23:43.307584 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************
2025-07-04 18:23:43.307595 | orchestrator | Friday 04 July 2025 18:22:41 +0000 (0:00:03.947) 0:00:14.407 ***********
2025-07-04 18:23:43.307605 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-07-04 18:23:43.307616 | orchestrator | changed: [testbed-node-0] => (item=placement -> service)
2025-07-04 18:23:43.307627 | orchestrator |
2025-07-04 18:23:43.307638 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************
2025-07-04 18:23:43.307668 | orchestrator | Friday 04 July 2025 18:22:45 +0000 (0:00:03.547) 0:00:18.355 ***********
2025-07-04 18:23:43.307683 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-07-04 18:23:43.307696 | orchestrator |
2025-07-04 18:23:43.307708 | orchestrator | TASK [service-ks-register : placement | Granting user roles] *******************
2025-07-04 18:23:43.307722 | orchestrator | Friday 04 July 2025 18:22:49 +0000 (0:00:03.547) 0:00:21.903 ***********
2025-07-04 18:23:43.307758 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin)
2025-07-04 18:23:43.307771 | orchestrator |
2025-07-04 18:23:43.307784 | orchestrator | TASK [placement : include_tasks] ***********************************************
2025-07-04 18:23:43.307797 | orchestrator | Friday 04 July 2025 18:22:53 +0000 (0:00:04.460) 0:00:26.364 ***********
2025-07-04 18:23:43.307810 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:23:43.307822 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:23:43.307834 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:23:43.307846 | orchestrator |
2025-07-04 18:23:43.307858 | orchestrator | TASK [placement : Ensuring config directories exist] ***************************
2025-07-04 18:23:43.307871 | orchestrator | Friday 04 July 2025 18:22:54 +0000 (0:00:00.542) 0:00:26.906 ***********
2025-07-04 18:23:43.307923 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes':
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-04 18:23:43.307962 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-04 18:23:43.307976 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-04 18:23:43.307988 | orchestrator | 2025-07-04 18:23:43.308000 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2025-07-04 18:23:43.308012 | orchestrator | Friday 04 July 2025 18:22:55 +0000 (0:00:01.449) 0:00:28.356 *********** 2025-07-04 18:23:43.308037 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:23:43.308049 | orchestrator | 2025-07-04 18:23:43.308060 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2025-07-04 18:23:43.308080 | orchestrator | Friday 04 July 2025 18:22:55 +0000 (0:00:00.260) 0:00:28.617 *********** 2025-07-04 18:23:43.308091 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:23:43.308116 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:23:43.308127 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:23:43.308139 | orchestrator | 2025-07-04 18:23:43.308150 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-07-04 18:23:43.308160 | orchestrator | Friday 04 July 2025 18:22:56 +0000 (0:00:00.791) 0:00:29.408 *********** 2025-07-04 18:23:43.308172 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-04 18:23:43.308183 | orchestrator | 2025-07-04 18:23:43.308194 | orchestrator | TASK [service-cert-copy : placement | Copying 
over extra CA certificates] ****** 2025-07-04 18:23:43.308205 | orchestrator | Friday 04 July 2025 18:22:57 +0000 (0:00:01.173) 0:00:30.581 *********** 2025-07-04 18:23:43.308222 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-04 18:23:43.308244 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-04 18:23:43.308257 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-04 18:23:43.308268 | orchestrator | 2025-07-04 18:23:43.308279 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2025-07-04 18:23:43.308290 | orchestrator | Friday 04 July 2025 18:22:59 +0000 (0:00:01.639) 0:00:32.221 *********** 2025-07-04 18:23:43.308308 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-04 18:23:43.308320 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:23:43.308336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-04 18:23:43.308348 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:23:43.308367 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-04 18:23:43.308379 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:23:43.308390 | orchestrator | 2025-07-04 18:23:43.308401 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2025-07-04 18:23:43.308413 | orchestrator | Friday 04 July 2025 18:23:00 +0000 (0:00:01.090) 0:00:33.311 *********** 2025-07-04 18:23:43.308424 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-04 18:23:43.308443 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:23:43.308455 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-04 18:23:43.308467 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:23:43.308478 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-04 18:23:43.308489 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:23:43.308501 | orchestrator | 2025-07-04 18:23:43.308517 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2025-07-04 18:23:43.308529 | orchestrator | Friday 04 July 2025 
18:23:01 +0000 (0:00:01.021) 0:00:34.333 *********** 2025-07-04 18:23:43.308548 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-04 18:23:43.308561 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-04 18:23:43.308579 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-04 18:23:43.308591 | orchestrator | 2025-07-04 18:23:43.308602 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2025-07-04 18:23:43.308614 | orchestrator | Friday 04 July 2025 18:23:03 +0000 (0:00:01.531) 0:00:35.864 *********** 2025-07-04 18:23:43.308625 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-04 18:23:43.308642 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-04 18:23:43.308661 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 
'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-04 18:23:43.308680 | orchestrator | 2025-07-04 18:23:43.308691 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2025-07-04 18:23:43.308702 | orchestrator | Friday 04 July 2025 18:23:06 +0000 (0:00:02.906) 0:00:38.771 *********** 2025-07-04 18:23:43.308726 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-07-04 18:23:43.308749 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-07-04 18:23:43.308762 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-07-04 18:23:43.308773 | orchestrator | 2025-07-04 18:23:43.308784 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2025-07-04 18:23:43.308796 | orchestrator | Friday 04 July 2025 18:23:08 +0000 (0:00:02.396) 0:00:41.168 *********** 2025-07-04 18:23:43.308807 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:23:43.308817 | orchestrator | changed: [testbed-node-1] 2025-07-04 18:23:43.308829 | orchestrator | changed: [testbed-node-2] 2025-07-04 18:23:43.308840 | orchestrator | 2025-07-04 18:23:43.308851 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2025-07-04 18:23:43.308863 | orchestrator | Friday 04 July 2025 18:23:10 +0000 (0:00:02.052) 0:00:43.220 *********** 2025-07-04 18:23:43.308875 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-04 18:23:43.308939 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:23:43.308961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-04 18:23:43.308973 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:23:43.308995 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-04 18:23:43.309014 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:23:43.309025 | orchestrator | 2025-07-04 18:23:43.309036 | orchestrator | TASK [placement : Check placement containers] ********************************** 2025-07-04 18:23:43.309047 | orchestrator | Friday 04 July 2025 18:23:11 +0000 (0:00:00.951) 0:00:44.172 *********** 2025-07-04 18:23:43.309059 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-04 18:23:43.309069 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': 
{'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-04 18:23:43.309089 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-04 18:23:43.309100 | orchestrator | 2025-07-04 18:23:43.309110 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2025-07-04 18:23:43.309120 | orchestrator | 
Friday 04 July 2025 18:23:13 +0000 (0:00:01.756) 0:00:45.929 *********** 2025-07-04 18:23:43.309130 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:23:43.309140 | orchestrator | 2025-07-04 18:23:43.309149 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2025-07-04 18:23:43.309159 | orchestrator | Friday 04 July 2025 18:23:15 +0000 (0:00:01.899) 0:00:47.828 *********** 2025-07-04 18:23:43.309176 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:23:43.309198 | orchestrator | 2025-07-04 18:23:43.309218 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2025-07-04 18:23:43.309228 | orchestrator | Friday 04 July 2025 18:23:17 +0000 (0:00:02.401) 0:00:50.230 *********** 2025-07-04 18:23:43.309246 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:23:43.309256 | orchestrator | 2025-07-04 18:23:43.309266 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-07-04 18:23:43.309276 | orchestrator | Friday 04 July 2025 18:23:31 +0000 (0:00:13.550) 0:01:03.780 *********** 2025-07-04 18:23:43.309285 | orchestrator | 2025-07-04 18:23:43.309295 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-07-04 18:23:43.309305 | orchestrator | Friday 04 July 2025 18:23:31 +0000 (0:00:00.126) 0:01:03.907 *********** 2025-07-04 18:23:43.309314 | orchestrator | 2025-07-04 18:23:43.309324 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-07-04 18:23:43.309334 | orchestrator | Friday 04 July 2025 18:23:31 +0000 (0:00:00.120) 0:01:04.028 *********** 2025-07-04 18:23:43.309344 | orchestrator | 2025-07-04 18:23:43.309353 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2025-07-04 18:23:43.309363 | orchestrator | Friday 04 July 2025 18:23:31 +0000 (0:00:00.121) 0:01:04.150 *********** 
2025-07-04 18:23:43.309373 | orchestrator | changed: [testbed-node-1]
2025-07-04 18:23:43.309384 | orchestrator | changed: [testbed-node-2]
2025-07-04 18:23:43.309394 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:23:43.309404 | orchestrator |
2025-07-04 18:23:43.309414 | orchestrator | PLAY RECAP *********************************************************************
2025-07-04 18:23:43.309426 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-07-04 18:23:43.309437 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-07-04 18:23:43.309447 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-07-04 18:23:43.309457 | orchestrator |
2025-07-04 18:23:43.309467 | orchestrator |
2025-07-04 18:23:43.309476 | orchestrator | TASKS RECAP ********************************************************************
2025-07-04 18:23:43.309486 | orchestrator | Friday 04 July 2025 18:23:41 +0000 (0:00:10.098) 0:01:14.248 ***********
2025-07-04 18:23:43.309496 | orchestrator | ===============================================================================
2025-07-04 18:23:43.309506 | orchestrator | placement : Running placement bootstrap container ---------------------- 13.55s
2025-07-04 18:23:43.309516 | orchestrator | placement : Restart placement-api container ---------------------------- 10.10s
2025-07-04 18:23:43.309526 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.20s
2025-07-04 18:23:43.309536 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.46s
2025-07-04 18:23:43.309545 | orchestrator | service-ks-register : placement | Creating users ------------------------ 3.95s
2025-07-04 18:23:43.309555 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.55s
2025-07-04 18:23:43.309565 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.19s
2025-07-04 18:23:43.309575 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.16s
2025-07-04 18:23:43.309585 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.91s
2025-07-04 18:23:43.309595 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.40s
2025-07-04 18:23:43.309604 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 2.40s
2025-07-04 18:23:43.309614 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 2.05s
2025-07-04 18:23:43.309624 | orchestrator | placement : Creating placement databases -------------------------------- 1.90s
2025-07-04 18:23:43.309641 | orchestrator | placement : Check placement containers ---------------------------------- 1.76s
2025-07-04 18:23:43.309651 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.64s
2025-07-04 18:23:43.309660 | orchestrator | placement : Copying over config.json files for services ----------------- 1.53s
2025-07-04 18:23:43.309670 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.45s
2025-07-04 18:23:43.309680 | orchestrator | placement : include_tasks ----------------------------------------------- 1.17s
2025-07-04 18:23:43.309691 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 1.09s
2025-07-04 18:23:43.309708 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 1.02s
2025-07-04 18:23:43.309728 | orchestrator | 2025-07-04 18:23:43 | INFO  | Task 1c1af083-d8d2-4b30-8bdc-e75fc75e0db0 is in state STARTED
2025-07-04 18:23:43.309836 | orchestrator | 2025-07-04 18:23:43 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:23:46.345008 | orchestrator | 2025-07-04 18:23:46 | INFO  | Task ff5b3e53-7431-45f2-a9cd-7d462a596cf0 is in state STARTED
2025-07-04 18:23:46.346166 | orchestrator | 2025-07-04 18:23:46 | INFO  | Task b8d0347d-dad2-42ee-bd67-6689c5f70861 is in state STARTED
2025-07-04 18:23:46.346944 | orchestrator | 2025-07-04 18:23:46 | INFO  | Task 886bbda2-8071-4195-bb4c-58f8a025d7c6 is in state STARTED
2025-07-04 18:23:46.349305 | orchestrator | 2025-07-04 18:23:46 | INFO  | Task 1c1af083-d8d2-4b30-8bdc-e75fc75e0db0 is in state STARTED
2025-07-04 18:23:46.349387 | orchestrator | 2025-07-04 18:23:46 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:23:49.389392 | orchestrator | 2025-07-04 18:23:49 | INFO  | Task ff5b3e53-7431-45f2-a9cd-7d462a596cf0 is in state STARTED
2025-07-04 18:23:49.391954 | orchestrator | 2025-07-04 18:23:49 | INFO  | Task b8d0347d-dad2-42ee-bd67-6689c5f70861 is in state STARTED
2025-07-04 18:23:49.392848 | orchestrator | 2025-07-04 18:23:49 | INFO  | Task 886bbda2-8071-4195-bb4c-58f8a025d7c6 is in state SUCCESS
2025-07-04 18:23:49.394434 | orchestrator | 2025-07-04 18:23:49 | INFO  | Task 1c1af083-d8d2-4b30-8bdc-e75fc75e0db0 is in state STARTED
2025-07-04 18:23:49.394674 | orchestrator | 2025-07-04 18:23:49 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:23:52.452381 | orchestrator | 2025-07-04 18:23:52 | INFO  | Task ff5b3e53-7431-45f2-a9cd-7d462a596cf0 is in state STARTED
2025-07-04 18:23:52.454229 | orchestrator | 2025-07-04 18:23:52 | INFO  | Task b8d0347d-dad2-42ee-bd67-6689c5f70861 is in state STARTED
2025-07-04 18:23:52.456190 | orchestrator | 2025-07-04 18:23:52 | INFO  | Task 1c1af083-d8d2-4b30-8bdc-e75fc75e0db0 is in state STARTED
2025-07-04 18:23:52.458156 | orchestrator | 2025-07-04 18:23:52 | INFO  | Task 113ea2de-0d8f-4652-bd54-17e6ffe199be is in state STARTED
2025-07-04 18:23:52.458443 | orchestrator | 2025-07-04 18:23:52 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:23:55.506337 | orchestrator | 2025-07-04 18:23:55 | INFO  | Task ff5b3e53-7431-45f2-a9cd-7d462a596cf0 is in state STARTED
2025-07-04 18:23:55.506432 | orchestrator | 2025-07-04 18:23:55 | INFO  | Task b8d0347d-dad2-42ee-bd67-6689c5f70861 is in state STARTED
2025-07-04 18:23:55.507103 | orchestrator | 2025-07-04 18:23:55 | INFO  | Task 1c1af083-d8d2-4b30-8bdc-e75fc75e0db0 is in state STARTED
2025-07-04 18:23:55.508041 | orchestrator | 2025-07-04 18:23:55 | INFO  | Task 113ea2de-0d8f-4652-bd54-17e6ffe199be is in state STARTED
2025-07-04 18:23:55.508071 | orchestrator | 2025-07-04 18:23:55 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:23:58.546356 | orchestrator | 2025-07-04 18:23:58 | INFO  | Task ff5b3e53-7431-45f2-a9cd-7d462a596cf0 is in state STARTED
2025-07-04 18:23:58.547978 | orchestrator | 2025-07-04 18:23:58 | INFO  | Task b8d0347d-dad2-42ee-bd67-6689c5f70861 is in state STARTED
2025-07-04 18:23:58.549515 | orchestrator | 2025-07-04 18:23:58 | INFO  | Task 1c1af083-d8d2-4b30-8bdc-e75fc75e0db0 is in state STARTED
2025-07-04 18:23:58.551406 | orchestrator | 2025-07-04 18:23:58 | INFO  | Task 113ea2de-0d8f-4652-bd54-17e6ffe199be is in state STARTED
2025-07-04 18:23:58.552105 | orchestrator | 2025-07-04 18:23:58 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:24:01.604262 | orchestrator | 2025-07-04 18:24:01 | INFO  | Task ff5b3e53-7431-45f2-a9cd-7d462a596cf0 is in state STARTED
2025-07-04 18:24:01.605645 | orchestrator | 2025-07-04 18:24:01 | INFO  | Task b8d0347d-dad2-42ee-bd67-6689c5f70861 is in state STARTED
2025-07-04 18:24:01.608166 | orchestrator | 2025-07-04 18:24:01 | INFO  | Task 1c1af083-d8d2-4b30-8bdc-e75fc75e0db0 is in state STARTED
2025-07-04 18:24:01.608588 | orchestrator | 2025-07-04 18:24:01 | INFO  | Task 113ea2de-0d8f-4652-bd54-17e6ffe199be is in state STARTED
2025-07-04 18:24:01.608615 | orchestrator | 2025-07-04 18:24:01 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:24:04.652966 | orchestrator | 2025-07-04 18:24:04 | INFO  | Task ff5b3e53-7431-45f2-a9cd-7d462a596cf0 is in state STARTED
2025-07-04 18:24:04.653082 | orchestrator | 2025-07-04 18:24:04 | INFO  | Task c5a8b9dc-f81d-4d77-a85c-ddebc6597e36 is in state STARTED
2025-07-04 18:24:04.659320 | orchestrator |
2025-07-04 18:24:04.659408 | orchestrator |
2025-07-04 18:24:04.659469 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-04 18:24:04.659484 | orchestrator |
2025-07-04 18:24:04.659495 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-04 18:24:04.659506 | orchestrator | Friday 04 July 2025 18:23:47 +0000 (0:00:00.246) 0:00:00.246 ***********
2025-07-04 18:24:04.659531 | orchestrator | ok: [testbed-node-0]
2025-07-04 18:24:04.659543 | orchestrator | ok: [testbed-node-1]
2025-07-04 18:24:04.659554 | orchestrator | ok: [testbed-node-2]
2025-07-04 18:24:04.659564 | orchestrator |
2025-07-04 18:24:04.659575 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-04 18:24:04.659586 | orchestrator | Friday 04 July 2025 18:23:47 +0000 (0:00:00.328) 0:00:00.575 ***********
2025-07-04 18:24:04.659597 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2025-07-04 18:24:04.659608 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2025-07-04 18:24:04.659619 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2025-07-04 18:24:04.659629 | orchestrator |
2025-07-04 18:24:04.659640 | orchestrator | PLAY [Wait for the Keystone service] *******************************************
2025-07-04 18:24:04.659651 | orchestrator |
2025-07-04 18:24:04.659661 | orchestrator | TASK [Waiting for Keystone public port to be UP] *******************************
2025-07-04 18:24:04.659691 | orchestrator | Friday 04 July 2025 18:23:48 +0000 (0:00:00.594) 0:00:01.169 ***********
2025-07-04 18:24:04.659702 | orchestrator | ok: [testbed-node-2]
2025-07-04 18:24:04.659713 | orchestrator | ok: [testbed-node-1]
2025-07-04 18:24:04.659724 | orchestrator | ok: [testbed-node-0]
2025-07-04 18:24:04.659734 | orchestrator |
2025-07-04 18:24:04.659745 | orchestrator | PLAY RECAP *********************************************************************
2025-07-04 18:24:04.659757 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-04 18:24:04.659769 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-04 18:24:04.659780 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-04 18:24:04.659791 | orchestrator |
2025-07-04 18:24:04.659802 | orchestrator |
2025-07-04 18:24:04.659812 | orchestrator | TASKS RECAP ********************************************************************
2025-07-04 18:24:04.659934 | orchestrator | Friday 04 July 2025 18:23:48 +0000 (0:00:00.679) 0:00:01.849 ***********
2025-07-04 18:24:04.659949 | orchestrator | ===============================================================================
2025-07-04 18:24:04.659962 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.68s
2025-07-04 18:24:04.659974 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.59s
2025-07-04 18:24:04.660018 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.33s
2025-07-04 18:24:04.660031 | orchestrator |
2025-07-04 18:24:04.660043 | orchestrator |
2025-07-04 18:24:04.660055 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-04 18:24:04.660067 | orchestrator |
2025-07-04 18:24:04.660080 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-04 18:24:04.660126 | orchestrator | Friday 04 July 2025 18:21:03 +0000 (0:00:00.276) 0:00:00.276 ***********
2025-07-04 18:24:04.660139 | orchestrator | ok: [testbed-node-0]
2025-07-04 18:24:04.660152 | orchestrator | ok: [testbed-node-1]
2025-07-04 18:24:04.660164 | orchestrator | ok: [testbed-node-2]
2025-07-04 18:24:04.660199 | orchestrator |
2025-07-04 18:24:04.660213 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-04 18:24:04.660258 | orchestrator | Friday 04 July 2025 18:21:03 +0000 (0:00:00.326) 0:00:00.603 ***********
2025-07-04 18:24:04.660271 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True)
2025-07-04 18:24:04.660296 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True)
2025-07-04 18:24:04.660308 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True)
2025-07-04 18:24:04.660319 | orchestrator |
2025-07-04 18:24:04.660329 | orchestrator | PLAY [Apply role designate] ****************************************************
2025-07-04 18:24:04.660340 | orchestrator |
2025-07-04 18:24:04.660351 | orchestrator | TASK [designate : include_tasks] ***********************************************
2025-07-04 18:24:04.660361 | orchestrator | Friday 04 July 2025 18:21:04 +0000 (0:00:00.474) 0:00:01.078 ***********
2025-07-04 18:24:04.660372 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-04 18:24:04.660383 | orchestrator |
2025-07-04 18:24:04.660394 | orchestrator | TASK [service-ks-register : designate | Creating services] *********************
2025-07-04 18:24:04.660404 | orchestrator | Friday 04 July 2025 18:21:04 +0000 (0:00:00.582) 0:00:01.660 ***********
2025-07-04 18:24:04.660415 | orchestrator | changed: [testbed-node-0] => (item=designate (dns))
2025-07-04 18:24:04.660425 | orchestrator |
2025-07-04 18:24:04.660436 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ********************
2025-07-04 18:24:04.660447 | orchestrator | Friday 04 July 2025 18:21:08 +0000 (0:00:03.644) 0:00:05.305 ***********
2025-07-04 18:24:04.660458 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal)
2025-07-04 18:24:04.660469 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public)
2025-07-04 18:24:04.660479 | orchestrator |
2025-07-04 18:24:04.660490 | orchestrator | TASK [service-ks-register : designate | Creating projects] *********************
2025-07-04 18:24:04.660500 | orchestrator | Friday 04 July 2025 18:21:15 +0000 (0:00:06.898) 0:00:12.203 ***********
2025-07-04 18:24:04.660511 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-07-04 18:24:04.660522 | orchestrator |
2025-07-04 18:24:04.660532 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************
2025-07-04 18:24:04.660543 | orchestrator | Friday 04 July 2025 18:21:18 +0000 (0:00:03.316) 0:00:15.520 ***********
2025-07-04 18:24:04.660584 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-07-04 18:24:04.660596 | orchestrator | changed: [testbed-node-0] => (item=designate -> service)
2025-07-04 18:24:04.660607 | orchestrator |
2025-07-04 18:24:04.660617 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************
2025-07-04 18:24:04.660628 | orchestrator | Friday 04 July 2025 18:21:22 +0000 (0:00:03.911) 0:00:19.431 ***********
2025-07-04 18:24:04.660666 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-07-04 18:24:04.660678 | orchestrator |
2025-07-04 18:24:04.660689 | orchestrator | TASK [service-ks-register : designate | Granting user roles] *******************
2025-07-04 18:24:04.660751 | orchestrator | Friday 04 July 2025 18:21:25 +0000 (0:00:03.328) 0:00:22.759 ***********
2025-07-04 18:24:04.660762 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin)
2025-07-04 18:24:04.660773 | orchestrator |
2025-07-04 18:24:04.660784 | orchestrator | TASK [designate : Ensuring config directories exist] ***************************
2025-07-04 18:24:04.660794 | orchestrator | Friday 04 July 2025 18:21:29 +0000 (0:00:04.015) 0:00:26.775 ***********
2025-07-04 18:24:04.660808 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-07-04 18:24:04.660825 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-07-04 18:24:04.660837 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-07-04 18:24:04.660849 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-07-04 18:24:04.660896 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-07-04 18:24:04.660910 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-07-04 18:24:04.660922 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-07-04 18:24:04.660933 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-07-04 18:24:04.660944 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-07-04 18:24:04.660955 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-07-04 18:24:04.660975 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-07-04 18:24:04.660997 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-07-04 18:24:04.661009 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-07-04 18:24:04.661020 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-07-04 18:24:04.661031 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-07-04 18:24:04.661043 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-07-04 18:24:04.661054 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-07-04 18:24:04.661084 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-07-04 18:24:04.661097 | orchestrator |
2025-07-04 18:24:04.661107 | orchestrator | TASK [designate : Check if policies shall be overwritten] **********************
2025-07-04 18:24:04.661118 | orchestrator | Friday 04 July 2025 18:21:33 +0000 (0:00:03.176) 0:00:29.951 ***********
2025-07-04 18:24:04.661129 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:24:04.661140 | orchestrator |
2025-07-04 18:24:04.661150 | orchestrator | TASK [designate : Set designate policy file] ***********************************
2025-07-04 18:24:04.661161 | orchestrator | Friday 04 July 2025 18:21:33 +0000 (0:00:00.111) 0:00:30.063 ***********
2025-07-04 18:24:04.661171 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:24:04.661182 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:24:04.661193 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:24:04.661203 | orchestrator |
2025-07-04 18:24:04.661214 | orchestrator | TASK [designate : include_tasks] ***********************************************
2025-07-04 18:24:04.661224 | orchestrator | Friday 04 July 2025 18:21:33 +0000 (0:00:00.254) 0:00:30.317 ***********
2025-07-04 18:24:04.661235 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-04 18:24:04.661245 | orchestrator |
2025-07-04 18:24:04.661256 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ******
2025-07-04 18:24:04.661267 | orchestrator | Friday 04 July 2025 18:21:34 +0000 (0:00:00.622) 0:00:30.940 ***********
2025-07-04 18:24:04.661278 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-07-04 18:24:04.661290 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-07-04 18:24:04.661307 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-07-04 18:24:04.661329 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-07-04 18:24:04.661341 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-07-04 18:24:04.661353 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-07-04 18:24:04.661364 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-07-04 18:24:04.661375 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-07-04 18:24:04.661400 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-07-04 18:24:04.661417 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-07-04 18:24:04.661433 | orchestrator | changed: [testbed-node-0] =>
(item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-04 18:24:04.661445 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-04 18:24:04.661456 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-04 18:24:04.661467 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 
'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-04 18:24:04.661479 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-04 18:24:04.661496 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-04 18:24:04.661518 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-04 18:24:04.661530 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-04 18:24:04.661541 | orchestrator | 2025-07-04 18:24:04.661552 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2025-07-04 18:24:04.661563 | orchestrator | Friday 04 July 2025 18:21:40 +0000 (0:00:06.037) 0:00:36.977 *********** 2025-07-04 18:24:04.661574 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': 
{'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-04 18:24:04.661586 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-04 18:24:04.661603 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-04 18:24:04.661614 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-04 18:24:04.661631 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-04 18:24:04 | INFO  | Task b8d0347d-dad2-42ee-bd67-6689c5f70861 is in state SUCCESS 2025-07-04 18:24:04.662484 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-04 18:24:04.662507 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:24:04.662538 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-04 18:24:04.662549 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-04 18:24:04.662571 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-04 18:24:04.662583 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-04 18:24:04.662664 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-04 18:24:04.662681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-04 
18:24:04.662692 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:24:04.662704 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-04 18:24:04.662722 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-04 18:24:04.662734 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-04 18:24:04.662746 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-04 18:24:04.662794 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-04 18:24:04.662808 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-04 18:24:04.662819 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:24:04.662830 | orchestrator | 2025-07-04 18:24:04.662841 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2025-07-04 18:24:04.662852 | orchestrator | Friday 04 July 2025 18:21:41 +0000 (0:00:01.766) 0:00:38.744 *********** 2025-07-04 18:24:04.662864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-04 18:24:04.662949 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-04 18:24:04.662964 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-04 18:24:04.662978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-04 18:24:04.663031 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-04 18:24:04.663045 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-04 18:24:04.663058 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:24:04.663072 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-04 18:24:04.663091 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 
'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-04 18:24:04.663104 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-04 18:24:04.663117 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-04 18:24:04.663164 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 
'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-04 18:24:04.663178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-04 18:24:04.663191 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:24:04.663204 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-04 18:24:04.663222 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-04 18:24:04.663235 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-04 18:24:04.663248 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-04 18:24:04.663296 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-04 18:24:04.663310 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-04 18:24:04.663322 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:24:04.663333 | orchestrator | 2025-07-04 18:24:04.663343 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2025-07-04 18:24:04.663360 | orchestrator | Friday 04 July 2025 18:21:43 +0000 (0:00:01.347) 0:00:40.092 *********** 2025-07-04 18:24:04.663372 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-04 18:24:04.663384 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-04 18:24:04.663395 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-04 18:24:04.663449 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-04 18:24:04.663463 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-04 
18:24:04.663479 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-04 18:24:04.663489 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-04 18:24:04.663499 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-04 
18:24:04.663509 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-04 18:24:04.663519 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-04 18:24:04.663563 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-04 18:24:04.663575 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 
'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-04 18:24:04.663591 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-04 18:24:04.663601 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-04 18:24:04.663611 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 
'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-04 18:24:04.663621 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-04 18:24:04.663658 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-04 18:24:04.663674 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-07-04 18:24:04.663689 | orchestrator |
2025-07-04 18:24:04.663699 | orchestrator | TASK [designate : Copying over designate.conf] *********************************
2025-07-04 18:24:04.663709 | orchestrator | Friday 04 July 2025 18:21:49 +0000 (0:00:06.729) 0:00:46.822 ***********
2025-07-04 18:24:04.663719 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-07-04 18:24:04.663729 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-04 18:24:04.663740 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-04 18:24:04.663754 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-04 18:24:04.663768 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-04 18:24:04.663784 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-04 18:24:04.663794 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-04 18:24:04.663804 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-04 18:24:04.663814 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-04 18:24:04.663824 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-04 18:24:04.663845 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-04 18:24:04.663861 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-04 18:24:04.663871 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-04 18:24:04.663898 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-04 18:24:04.663907 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-04 18:24:04.663917 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-07-04 18:24:04.663927 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-07-04 18:24:04.663955 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-07-04 18:24:04.663965 | orchestrator |
2025-07-04 18:24:04.663975 | orchestrator | TASK [designate : Copying over pools.yaml] *************************************
2025-07-04 18:24:04.663985 | orchestrator | Friday 04 July 2025 18:22:09 +0000 (0:00:19.244) 0:01:06.066 ***********
2025-07-04 18:24:04.663994 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2025-07-04 18:24:04.664004 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2025-07-04 18:24:04.664013 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2025-07-04 18:24:04.664022 | orchestrator |
2025-07-04 18:24:04.664032 | orchestrator | TASK [designate : Copying over named.conf] *************************************
2025-07-04 18:24:04.664042 | orchestrator | Friday 04 July 2025 18:22:16 +0000 (0:00:07.022) 0:01:13.088 ***********
2025-07-04 18:24:04.664051 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2)
2025-07-04 18:24:04.664060 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2)
2025-07-04 18:24:04.664070 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2)
2025-07-04 18:24:04.664079 | orchestrator |
2025-07-04 18:24:04.664088 | orchestrator | TASK [designate : Copying over rndc.conf] **************************************
2025-07-04 18:24:04.664098 | orchestrator | Friday 04 July 2025 18:22:19 +0000 (0:00:03.671) 0:01:16.760 ***********
2025-07-04 18:24:04.664108 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-07-04 18:24:04.664118 | orchestrator | skipping:
[testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-04 18:24:04.664134 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-04 18:24:04.664153 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-04 18:24:04.664164 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-04 18:24:04.664174 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-04 18:24:04.664184 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 
'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-04 18:24:04.664194 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-04 18:24:04.664209 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-04 18:24:04.664228 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-04 18:24:04.664239 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-04 18:24:04.664250 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-04 18:24:04.664260 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-04 18:24:04.664270 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-04 18:24:04.664279 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-04 18:24:04.664299 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 
'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-04 18:24:04.664317 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-04 18:24:04.664327 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-04 18:24:04.664336 | orchestrator | 2025-07-04 18:24:04.664346 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2025-07-04 18:24:04.664356 | orchestrator | Friday 04 July 2025 18:22:22 +0000 (0:00:02.689) 0:01:19.450 *********** 2025-07-04 18:24:04.664366 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-04 18:24:04.664376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-04 18:24:04.664391 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-04 18:24:04.664411 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-04 18:24:04.664422 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-04 18:24:04.664432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-04 18:24:04.664442 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-04 18:24:04.664452 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-04 18:24:04.664467 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-04 18:24:04.664486 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-04 18:24:04.664497 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-04 
18:24:04.664507 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-04 18:24:04.664517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-04 18:24:04.664527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-04 18:24:04.664542 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-04 18:24:04.664552 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-04 18:24:04.664572 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-04 18:24:04.664583 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-04 18:24:04.664593 | orchestrator | 2025-07-04 18:24:04.664603 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-07-04 18:24:04.664612 | orchestrator | Friday 04 July 2025 18:22:26 +0000 (0:00:03.486) 0:01:22.937 *********** 2025-07-04 18:24:04.664622 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:24:04.664632 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:24:04.664642 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:24:04.664651 | orchestrator | 2025-07-04 18:24:04.664660 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2025-07-04 18:24:04.664670 | orchestrator | Friday 04 July 2025 18:22:26 +0000 (0:00:00.799) 0:01:23.737 *********** 2025-07-04 18:24:04.664680 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-04 18:24:04.664695 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-04 18:24:04.664705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-04 18:24:04.664724 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-04 18:24:04.664734 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-04 18:24:04.664745 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-04 18:24:04.664755 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 
'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-04 18:24:04.664770 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-04 18:24:04.664780 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-04 18:24:04.664789 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:24:04.664804 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': 
{'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-04 18:24:04.664818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-04 18:24:04.664829 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-04 18:24:04.664839 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:24:04.664849 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': 
{'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-04 18:24:04.664864 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-04 18:24:04.664888 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-07-04 18:24:04.664903 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-07-04 18:24:04.664918 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-07-04 18:24:04.664928 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-07-04 18:24:04.664938 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:24:04.664948 | orchestrator |
2025-07-04 18:24:04.664958 | orchestrator | TASK [designate : Check designate containers] **********************************
2025-07-04 18:24:04.664973 | orchestrator | Friday 04 July 2025 18:22:27 +0000 (0:00:00.912) 0:01:24.649 ***********
2025-07-04 18:24:04.664983 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-07-04 18:24:04.664993 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-07-04 18:24:04.665008 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-07-04 18:24:04.665022 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-07-04 18:24:04.665032 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-07-04 18:24:04.665051 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-07-04 18:24:04.665061 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-07-04 18:24:04.665071 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-07-04 18:24:04.665081 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-07-04 18:24:04.665100 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-07-04 18:24:04.665111 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-07-04 18:24:04.665121 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-07-04 18:24:04.665136 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-07-04 18:24:04.665146 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-07-04 18:24:04.665156 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-07-04 18:24:04.665169 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-07-04 18:24:04.665200 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-07-04 18:24:04.665219 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-07-04 18:24:04.665244 | orchestrator |
2025-07-04 18:24:04.665260 | orchestrator | TASK [designate : include_tasks] ***********************************************
2025-07-04 18:24:04.665277 | orchestrator | Friday 04 July 2025 18:22:31 +0000 (0:00:04.061) 0:01:28.711 ***********
2025-07-04 18:24:04.665293 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:24:04.665309 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:24:04.665325 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:24:04.665341 | orchestrator |
2025-07-04 18:24:04.665360 | orchestrator | TASK [designate : Creating Designate databases] ********************************
2025-07-04 18:24:04.665377 | orchestrator | Friday 04 July 2025 18:22:32 +0000 (0:00:00.290) 0:01:29.001 ***********
2025-07-04 18:24:04.665392 | orchestrator | changed: [testbed-node-0] => (item=designate)
2025-07-04 18:24:04.665409 | orchestrator |
2025-07-04 18:24:04.665425 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] ***
2025-07-04 18:24:04.665441 | orchestrator | Friday 04 July 2025 18:22:34 +0000 (0:00:02.323) 0:01:31.325 ***********
2025-07-04 18:24:04.665452 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-07-04 18:24:04.665462 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}]
2025-07-04 18:24:04.665472 | orchestrator |
2025-07-04 18:24:04.665481 | orchestrator | TASK [designate : Running Designate bootstrap container] ***********************
2025-07-04 18:24:04.665490 | orchestrator | Friday 04 July 2025 18:22:36 +0000 (0:00:02.205) 0:01:33.530 ***********
2025-07-04 18:24:04.665500 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:24:04.665509 | orchestrator |
2025-07-04 18:24:04.665519 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-07-04 18:24:04.665528 | orchestrator | Friday 04 July 2025 18:22:51 +0000 (0:00:14.971) 0:01:48.501 ***********
2025-07-04 18:24:04.665537 | orchestrator |
2025-07-04 18:24:04.665547 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-07-04 18:24:04.665556 | orchestrator | Friday 04 July 2025 18:22:51 +0000 (0:00:00.076) 0:01:48.578 ***********
2025-07-04 18:24:04.665565 | orchestrator |
2025-07-04 18:24:04.665575 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-07-04 18:24:04.665584 | orchestrator | Friday 04 July 2025 18:22:51 +0000 (0:00:00.096) 0:01:48.675 ***********
2025-07-04 18:24:04.665593 | orchestrator |
2025-07-04 18:24:04.665603 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ********
2025-07-04 18:24:04.665612 | orchestrator | Friday 04 July 2025 18:22:51 +0000 (0:00:00.062) 0:01:48.737 ***********
2025-07-04 18:24:04.665622 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:24:04.665631 | orchestrator | changed: [testbed-node-2]
2025-07-04 18:24:04.665640 | orchestrator | changed: [testbed-node-1]
2025-07-04 18:24:04.665650 | orchestrator |
2025-07-04 18:24:04.665659 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ******************
2025-07-04 18:24:04.665668 | orchestrator | Friday 04 July 2025 18:23:06 +0000 (0:00:14.438) 0:02:03.175 ***********
2025-07-04 18:24:04.665678 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:24:04.665687 | orchestrator | changed: [testbed-node-1]
2025-07-04 18:24:04.665696 | orchestrator | changed: [testbed-node-2]
2025-07-04 18:24:04.665706 | orchestrator |
2025-07-04 18:24:04.665715 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] **************
2025-07-04 18:24:04.665724 | orchestrator | Friday 04 July 2025 18:23:15 +0000 (0:00:09.112) 0:02:12.288 ***********
2025-07-04 18:24:04.665734 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:24:04.665743 | orchestrator | changed: [testbed-node-2]
2025-07-04 18:24:04.665752 | orchestrator | changed: [testbed-node-1]
2025-07-04 18:24:04.665762 | orchestrator |
2025-07-04 18:24:04.665771 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] *************
2025-07-04 18:24:04.665788 | orchestrator | Friday 04 July 2025 18:23:22 +0000 (0:00:07.519) 0:02:19.807 ***********
2025-07-04 18:24:04.665797 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:24:04.665807 | orchestrator | changed: [testbed-node-1]
2025-07-04 18:24:04.665816 | orchestrator | changed: [testbed-node-2]
2025-07-04 18:24:04.665826 | orchestrator |
2025-07-04 18:24:04.665835 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] *****************
2025-07-04 18:24:04.665844 | orchestrator | Friday 04 July 2025 18:23:30 +0000 (0:00:13.141) 0:02:27.040 ***********
2025-07-04 18:24:04.665854 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:24:04.665863 | orchestrator | changed: [testbed-node-1]
2025-07-04 18:24:04.665900 | orchestrator | changed: [testbed-node-2]
2025-07-04 18:24:04.665910 | orchestrator |
2025-07-04 18:24:04.665920 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] ***************
2025-07-04 18:24:04.665937 | orchestrator | Friday 04 July 2025 18:23:43 +0000 (0:00:13.141) 0:02:40.182 ***********
2025-07-04 18:24:04.665946 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:24:04.665956 | orchestrator | changed: [testbed-node-2]
2025-07-04 18:24:04.665965 | orchestrator | changed: [testbed-node-1]
2025-07-04 18:24:04.665975 | orchestrator |
2025-07-04 18:24:04.665989 | orchestrator | TASK [designate : Non-destructive DNS pools update] ****************************
2025-07-04 18:24:04.665999 | orchestrator | Friday 04 July 2025 18:23:55 +0000 (0:00:11.746) 0:02:51.928 ***********
2025-07-04 18:24:04.666008 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:24:04.666062 | orchestrator |
2025-07-04 18:24:04.666074 | orchestrator | PLAY RECAP *********************************************************************
2025-07-04 18:24:04.666084 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-07-04 18:24:04.666095 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-07-04 18:24:04.666105 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-07-04 18:24:04.666114 | orchestrator |
2025-07-04 18:24:04.666123 | orchestrator |
2025-07-04 18:24:04.666133 | orchestrator | TASKS RECAP ********************************************************************
2025-07-04 18:24:04.666142 | orchestrator | Friday 04 July 2025 18:24:02 +0000 (0:00:07.623) 0:02:59.551 ***********
2025-07-04 18:24:04.666152 | orchestrator | ===============================================================================
2025-07-04 18:24:04.666161 | orchestrator | designate : Copying over designate.conf -------------------------------- 19.24s
2025-07-04 18:24:04.666170 | orchestrator | designate : Running Designate bootstrap container ---------------------- 14.97s
2025-07-04 18:24:04.666180 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 14.44s
2025-07-04 18:24:04.666189 | orchestrator | designate : Restart designate-mdns container --------------------------- 13.14s
2025-07-04 18:24:04.666198 | orchestrator | designate : Restart designate-worker container ------------------------- 11.75s
2025-07-04 18:24:04.666208 | orchestrator | designate : Restart designate-api container ----------------------------- 9.11s
2025-07-04 18:24:04.666217 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.62s
2025-07-04 18:24:04.666226 | orchestrator | designate : Restart designate-central container ------------------------- 7.52s
2025-07-04 18:24:04.666236 | orchestrator | designate : Restart designate-producer container ------------------------ 7.23s
2025-07-04 18:24:04.666245 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 7.02s
2025-07-04 18:24:04.666255 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.90s
2025-07-04 18:24:04.666264 | orchestrator | designate : Copying over config.json files for services ----------------- 6.73s
2025-07-04 18:24:04.666274 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.04s
2025-07-04 18:24:04.666283 | orchestrator | designate : Check designate containers ---------------------------------- 4.06s
2025-07-04 18:24:04.666299 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.02s
2025-07-04 18:24:04.666308 | orchestrator | service-ks-register : designate | Creating users ------------------------ 3.91s
2025-07-04 18:24:04.666317 | orchestrator | designate : Copying over named.conf ------------------------------------- 3.67s
2025-07-04 18:24:04.666327 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.64s
2025-07-04 18:24:04.666336 | orchestrator | designate : Copying over rndc.key --------------------------------------- 3.49s
2025-07-04 18:24:04.666345 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.33s
2025-07-04 18:24:04.666355 | orchestrator | 2025-07-04 18:24:04 | INFO  | Task 1c1af083-d8d2-4b30-8bdc-e75fc75e0db0 is in state STARTED
2025-07-04 18:24:04.666365 | orchestrator | 2025-07-04 18:24:04 | INFO  | Task 113ea2de-0d8f-4652-bd54-17e6ffe199be is in state STARTED
2025-07-04 18:24:04.666374 | orchestrator | 2025-07-04 18:24:04 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:24:07.697629 | orchestrator | 2025-07-04 18:24:07 | INFO  | Task ff5b3e53-7431-45f2-a9cd-7d462a596cf0 is in state STARTED
2025-07-04 18:24:07.697714 | orchestrator | 2025-07-04 18:24:07 | INFO  | Task c5a8b9dc-f81d-4d77-a85c-ddebc6597e36 is in state STARTED
2025-07-04 18:24:07.697728 | orchestrator | 2025-07-04 18:24:07 | INFO  | Task 1c1af083-d8d2-4b30-8bdc-e75fc75e0db0 is in state STARTED
2025-07-04 18:24:07.697740 | orchestrator | 2025-07-04 18:24:07 | INFO  | Task 113ea2de-0d8f-4652-bd54-17e6ffe199be is in state STARTED
2025-07-04 18:24:07.697751 | orchestrator | 2025-07-04 18:24:07 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:24:10.737840 | orchestrator | 2025-07-04 18:24:10 | INFO  | Task ff5b3e53-7431-45f2-a9cd-7d462a596cf0 is in state STARTED
2025-07-04 18:24:10.738183 | orchestrator | 2025-07-04 18:24:10 | INFO  | Task c5a8b9dc-f81d-4d77-a85c-ddebc6597e36 is in state STARTED
2025-07-04 18:24:10.738982 | orchestrator | 2025-07-04 18:24:10 | INFO  | Task 1c1af083-d8d2-4b30-8bdc-e75fc75e0db0 is in state STARTED
2025-07-04 18:24:10.741944 | orchestrator | 2025-07-04 18:24:10 | INFO  | Task 113ea2de-0d8f-4652-bd54-17e6ffe199be is in state STARTED
2025-07-04 18:24:10.742087 | orchestrator | 2025-07-04 18:24:10 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:24:13.776148 | orchestrator | 2025-07-04 18:24:13 | INFO  | Task ff5b3e53-7431-45f2-a9cd-7d462a596cf0 is in state STARTED
2025-07-04 18:24:13.779914 | orchestrator | 2025-07-04 18:24:13 | INFO  | Task c5a8b9dc-f81d-4d77-a85c-ddebc6597e36 is in state STARTED
2025-07-04 18:24:13.779966 | orchestrator | 2025-07-04 18:24:13 | INFO  | Task 1c1af083-d8d2-4b30-8bdc-e75fc75e0db0 is in state STARTED
2025-07-04 18:24:13.779975 | orchestrator | 2025-07-04 18:24:13 | INFO  | Task 113ea2de-0d8f-4652-bd54-17e6ffe199be is in state STARTED
2025-07-04 18:24:13.779983 | orchestrator | 2025-07-04 18:24:13 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:24:16.820158 | orchestrator | 2025-07-04 18:24:16 | INFO  | Task ff5b3e53-7431-45f2-a9cd-7d462a596cf0 is in state STARTED
2025-07-04 18:24:16.820266 | orchestrator | 2025-07-04 18:24:16 | INFO  | Task c5a8b9dc-f81d-4d77-a85c-ddebc6597e36 is in state STARTED
2025-07-04 18:24:16.821496 | orchestrator | 2025-07-04 18:24:16 | INFO  | Task 1c1af083-d8d2-4b30-8bdc-e75fc75e0db0 is in state STARTED
2025-07-04 18:24:16.824863 | orchestrator | 2025-07-04 18:24:16 | INFO  | Task 113ea2de-0d8f-4652-bd54-17e6ffe199be is in state STARTED
2025-07-04 18:24:16.824969 | orchestrator | 2025-07-04 18:24:16 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:24:19.877825 | orchestrator | 2025-07-04 18:24:19 | INFO  | Task ff5b3e53-7431-45f2-a9cd-7d462a596cf0 is in state STARTED
2025-07-04 18:24:19.880079 | orchestrator | 2025-07-04 18:24:19 | INFO  | Task c5a8b9dc-f81d-4d77-a85c-ddebc6597e36 is in state STARTED
2025-07-04 18:24:19.882159 | orchestrator | 2025-07-04 18:24:19 | INFO  | Task 1c1af083-d8d2-4b30-8bdc-e75fc75e0db0 is in state STARTED
2025-07-04 18:24:19.884009 | orchestrator | 2025-07-04 18:24:19 | INFO  | Task 113ea2de-0d8f-4652-bd54-17e6ffe199be is in state STARTED
2025-07-04 18:24:19.884057 | orchestrator | 2025-07-04 18:24:19 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:24:22.928165 | orchestrator | 2025-07-04 18:24:22 | INFO  | Task ff5b3e53-7431-45f2-a9cd-7d462a596cf0 is in state STARTED
2025-07-04 18:24:22.929112 | orchestrator | 2025-07-04 18:24:22 | INFO  | Task c5a8b9dc-f81d-4d77-a85c-ddebc6597e36 is in state STARTED
2025-07-04 18:24:22.930124 | orchestrator | 2025-07-04 18:24:22 | INFO  | Task 1c1af083-d8d2-4b30-8bdc-e75fc75e0db0 is in state STARTED
2025-07-04 18:24:22.932268 | orchestrator | 2025-07-04 18:24:22 | INFO  | Task 113ea2de-0d8f-4652-bd54-17e6ffe199be is in state STARTED
2025-07-04 18:24:22.932295 | orchestrator | 2025-07-04 18:24:22 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:24:25.985157 | orchestrator | 2025-07-04 18:24:25 | INFO  | Task ff5b3e53-7431-45f2-a9cd-7d462a596cf0 is in state STARTED
2025-07-04 18:24:25.985283 | orchestrator | 2025-07-04 18:24:25 | INFO  | Task c5a8b9dc-f81d-4d77-a85c-ddebc6597e36 is in state STARTED
2025-07-04 18:24:25.985307 | orchestrator | 2025-07-04 18:24:25 | INFO  | Task 1c1af083-d8d2-4b30-8bdc-e75fc75e0db0 is in state STARTED
2025-07-04 18:24:25.985924 | orchestrator | 2025-07-04 18:24:25 | INFO  | Task 113ea2de-0d8f-4652-bd54-17e6ffe199be is in state STARTED
2025-07-04 18:24:25.985952 | orchestrator | 2025-07-04 18:24:25 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:24:29.045941 | orchestrator | 2025-07-04 18:24:29 | INFO  | Task ff5b3e53-7431-45f2-a9cd-7d462a596cf0 is in state STARTED
2025-07-04 18:24:29.046286 | orchestrator | 2025-07-04 18:24:29 | INFO  | Task c5a8b9dc-f81d-4d77-a85c-ddebc6597e36 is in state STARTED
2025-07-04 18:24:29.048368 | orchestrator | 2025-07-04 18:24:29 | INFO  | Task 1c1af083-d8d2-4b30-8bdc-e75fc75e0db0 is in state STARTED
2025-07-04 18:24:29.049363 | orchestrator | 2025-07-04 18:24:29 | INFO  | Task 113ea2de-0d8f-4652-bd54-17e6ffe199be is in state STARTED
2025-07-04 18:24:29.049387 | orchestrator | 2025-07-04 18:24:29 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:24:32.116959 | orchestrator | 2025-07-04 18:24:32 | INFO  | Task ff5b3e53-7431-45f2-a9cd-7d462a596cf0 is in state STARTED
2025-07-04 18:24:32.117061 | orchestrator | 2025-07-04 18:24:32 | INFO  | Task c5a8b9dc-f81d-4d77-a85c-ddebc6597e36 is in state STARTED
2025-07-04 18:24:32.117698 | orchestrator | 2025-07-04 18:24:32 | INFO  | Task 1c1af083-d8d2-4b30-8bdc-e75fc75e0db0 is in state STARTED
2025-07-04 18:24:32.119731 | orchestrator | 2025-07-04 18:24:32 | INFO  | Task 113ea2de-0d8f-4652-bd54-17e6ffe199be is in state STARTED
2025-07-04 18:24:32.119762 | orchestrator | 2025-07-04 18:24:32 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:24:35.186260 | orchestrator | 2025-07-04 18:24:35 | INFO  | Task ff5b3e53-7431-45f2-a9cd-7d462a596cf0 is in state STARTED
2025-07-04 18:24:35.186338 | orchestrator | 2025-07-04 18:24:35 | INFO  | Task c5a8b9dc-f81d-4d77-a85c-ddebc6597e36 is in state STARTED
2025-07-04 18:24:35.186346 | orchestrator | 2025-07-04 18:24:35 | INFO  | Task 1c1af083-d8d2-4b30-8bdc-e75fc75e0db0 is in state STARTED
2025-07-04 18:24:35.186350 | orchestrator | 2025-07-04 18:24:35 | INFO  | Task 113ea2de-0d8f-4652-bd54-17e6ffe199be is in state STARTED
2025-07-04 18:24:35.186372 | orchestrator | 2025-07-04 18:24:35 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:24:38.268326 | orchestrator | 2025-07-04 18:24:38 | INFO  | Task ff5b3e53-7431-45f2-a9cd-7d462a596cf0 is in state STARTED
2025-07-04 18:24:38.276552 | orchestrator | 2025-07-04 18:24:38 | INFO  | Task c5a8b9dc-f81d-4d77-a85c-ddebc6597e36 is in state STARTED
2025-07-04 18:24:38.282114 | orchestrator | 2025-07-04 18:24:38 | INFO  | Task 1c1af083-d8d2-4b30-8bdc-e75fc75e0db0 is in state STARTED
2025-07-04 18:24:38.291939 | orchestrator | 2025-07-04 18:24:38 | INFO  | Task 113ea2de-0d8f-4652-bd54-17e6ffe199be is in state STARTED
2025-07-04 18:24:38.291985 | orchestrator | 2025-07-04 18:24:38 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:24:41.332612 | orchestrator | 2025-07-04 18:24:41 | INFO  | Task ff5b3e53-7431-45f2-a9cd-7d462a596cf0 is in state STARTED
2025-07-04 18:24:41.333168 | orchestrator | 2025-07-04 18:24:41 | INFO  | Task c5a8b9dc-f81d-4d77-a85c-ddebc6597e36 is in state STARTED
2025-07-04 18:24:41.333628 | orchestrator | 2025-07-04 18:24:41 | INFO  | Task 1c1af083-d8d2-4b30-8bdc-e75fc75e0db0 is in state STARTED
2025-07-04 18:24:41.334291 | orchestrator | 2025-07-04 18:24:41 | INFO  | Task 113ea2de-0d8f-4652-bd54-17e6ffe199be is in state STARTED
2025-07-04 18:24:41.334314 | orchestrator | 2025-07-04 18:24:41 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:24:44.367176 | orchestrator | 2025-07-04 18:24:44 | INFO  | Task ff5b3e53-7431-45f2-a9cd-7d462a596cf0 is in state STARTED
2025-07-04 18:24:44.367303 | orchestrator | 2025-07-04 18:24:44 | INFO  | Task c5a8b9dc-f81d-4d77-a85c-ddebc6597e36 is in state STARTED
2025-07-04 18:24:44.368326 | orchestrator | 2025-07-04 18:24:44 | INFO  | Task 1c1af083-d8d2-4b30-8bdc-e75fc75e0db0 is in state STARTED
2025-07-04 18:24:44.369231 | orchestrator | 2025-07-04 18:24:44 | INFO  | Task 113ea2de-0d8f-4652-bd54-17e6ffe199be is in state STARTED
2025-07-04 18:24:44.369252 | orchestrator | 2025-07-04 18:24:44 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:24:47.401020 | orchestrator | 2025-07-04 18:24:47 | INFO  | Task ff5b3e53-7431-45f2-a9cd-7d462a596cf0 is in state STARTED
2025-07-04 18:24:47.401303 | orchestrator | 2025-07-04 18:24:47 | INFO  | Task c5a8b9dc-f81d-4d77-a85c-ddebc6597e36 is in state SUCCESS
2025-07-04 18:24:47.403172 | orchestrator | 2025-07-04 18:24:47 | INFO  | Task 1c1af083-d8d2-4b30-8bdc-e75fc75e0db0 is in state STARTED
2025-07-04 18:24:47.403872 | orchestrator | 2025-07-04 18:24:47 | INFO  | Task 174a19b7-c43b-4ce2-aed5-137b4ef219c3 is in state STARTED
2025-07-04 18:24:47.406093 | orchestrator | 2025-07-04 18:24:47 | INFO  | Task 113ea2de-0d8f-4652-bd54-17e6ffe199be is in state STARTED
2025-07-04 18:24:47.406185 | orchestrator | 2025-07-04 18:24:47 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:24:50.456771 | orchestrator | 2025-07-04 18:24:50 | INFO  | Task ff5b3e53-7431-45f2-a9cd-7d462a596cf0 is in state STARTED
2025-07-04 18:24:50.457060 | orchestrator | 2025-07-04 18:24:50 | INFO  | Task 1c1af083-d8d2-4b30-8bdc-e75fc75e0db0 is in state STARTED
2025-07-04 18:24:50.458299 | orchestrator | 2025-07-04 18:24:50 | INFO  | Task 174a19b7-c43b-4ce2-aed5-137b4ef219c3 is in state STARTED
2025-07-04 18:24:50.462219 | orchestrator | 2025-07-04 18:24:50 | INFO  | Task 113ea2de-0d8f-4652-bd54-17e6ffe199be is in state STARTED
2025-07-04 18:24:50.462291 | orchestrator | 2025-07-04 18:24:50 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:24:53.505133 | orchestrator | 2025-07-04 18:24:53 | INFO  | Task ff5b3e53-7431-45f2-a9cd-7d462a596cf0 is in state STARTED
2025-07-04 18:24:53.509988 | orchestrator | 2025-07-04 18:24:53 | INFO  | Task 1c1af083-d8d2-4b30-8bdc-e75fc75e0db0 is in state STARTED
2025-07-04 18:24:53.510239 | orchestrator | 2025-07-04 18:24:53 | INFO  | Task 174a19b7-c43b-4ce2-aed5-137b4ef219c3 is in state STARTED
2025-07-04 18:24:53.512439 | orchestrator | 2025-07-04 18:24:53 | INFO  | Task 113ea2de-0d8f-4652-bd54-17e6ffe199be is in state STARTED
2025-07-04 18:24:53.512469 | orchestrator | 2025-07-04 18:24:53 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:24:56.553009 | orchestrator | 2025-07-04 18:24:56 | INFO  | Task ff5b3e53-7431-45f2-a9cd-7d462a596cf0 is in state STARTED
2025-07-04 18:24:56.553574 | orchestrator | 2025-07-04 18:24:56 | INFO  | Task 1c1af083-d8d2-4b30-8bdc-e75fc75e0db0 is in state STARTED
2025-07-04 18:24:56.553960 | orchestrator | 2025-07-04 18:24:56 | INFO  | Task 174a19b7-c43b-4ce2-aed5-137b4ef219c3 is in state STARTED
2025-07-04 18:24:56.554767 | orchestrator | 2025-07-04 18:24:56 | INFO  | Task 113ea2de-0d8f-4652-bd54-17e6ffe199be is in state STARTED
2025-07-04 18:24:56.554807 | orchestrator | 2025-07-04 18:24:56 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:24:59.591239 | orchestrator | 2025-07-04 18:24:59 | INFO  | Task ff5b3e53-7431-45f2-a9cd-7d462a596cf0 is in state STARTED
2025-07-04 18:24:59.592865 | orchestrator | 2025-07-04 18:24:59 | INFO  | Task 1c1af083-d8d2-4b30-8bdc-e75fc75e0db0 is in state STARTED
2025-07-04 18:24:59.595881 | orchestrator | 2025-07-04 18:24:59 | INFO  | Task 174a19b7-c43b-4ce2-aed5-137b4ef219c3 is in state STARTED
2025-07-04 18:24:59.597600 | orchestrator | 2025-07-04 18:24:59 | INFO  | Task 113ea2de-0d8f-4652-bd54-17e6ffe199be is in state STARTED
2025-07-04 18:24:59.597644 | orchestrator | 2025-07-04 18:24:59 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:25:02.641646 | orchestrator | 2025-07-04 18:25:02 | INFO  | Task ff5b3e53-7431-45f2-a9cd-7d462a596cf0 is in state STARTED
2025-07-04 18:25:02.643085 | orchestrator | 2025-07-04 18:25:02 | INFO  | Task 1c1af083-d8d2-4b30-8bdc-e75fc75e0db0 is in state STARTED
2025-07-04 18:25:02.644411 | orchestrator | 2025-07-04 18:25:02 | INFO  | Task 174a19b7-c43b-4ce2-aed5-137b4ef219c3 is in state STARTED
2025-07-04 18:25:02.645901 | orchestrator | 2025-07-04 18:25:02 | INFO  | Task 113ea2de-0d8f-4652-bd54-17e6ffe199be is in state STARTED
2025-07-04 18:25:02.645927 | orchestrator | 2025-07-04 18:25:02 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:25:05.694646 | orchestrator | 2025-07-04 18:25:05 | INFO  | Task ff5b3e53-7431-45f2-a9cd-7d462a596cf0 is in state STARTED
2025-07-04 18:25:05.694830 | orchestrator | 2025-07-04 18:25:05 | INFO  | Task 1c1af083-d8d2-4b30-8bdc-e75fc75e0db0 is in state STARTED
2025-07-04 18:25:05.694884 | orchestrator | 2025-07-04 18:25:05 | INFO  | Task 174a19b7-c43b-4ce2-aed5-137b4ef219c3 is in state STARTED
2025-07-04 18:25:05.695932 | orchestrator | 2025-07-04 18:25:05 | INFO  | Task 113ea2de-0d8f-4652-bd54-17e6ffe199be is in state STARTED
2025-07-04 18:25:05.695980 | orchestrator | 2025-07-04 18:25:05 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:25:08.739276 | orchestrator | 2025-07-04 18:25:08 | INFO  | Task ff5b3e53-7431-45f2-a9cd-7d462a596cf0 is in state STARTED
2025-07-04 18:25:08.739615 | orchestrator | 2025-07-04 18:25:08 | INFO  | Task 1c1af083-d8d2-4b30-8bdc-e75fc75e0db0 is in state STARTED
2025-07-04 18:25:08.741733 | orchestrator | 2025-07-04 18:25:08 | INFO  | Task 174a19b7-c43b-4ce2-aed5-137b4ef219c3 is in state STARTED
2025-07-04 18:25:08.742512 | orchestrator | 2025-07-04 18:25:08 | INFO  | Task 113ea2de-0d8f-4652-bd54-17e6ffe199be is in state STARTED
2025-07-04 18:25:08.742549 | orchestrator | 2025-07-04 18:25:08 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:25:11.775694 | orchestrator | 2025-07-04 18:25:11 | INFO  | Task ff5b3e53-7431-45f2-a9cd-7d462a596cf0 is in state STARTED
2025-07-04 18:25:11.775780 | orchestrator | 2025-07-04 18:25:11 | INFO  | Task 1c1af083-d8d2-4b30-8bdc-e75fc75e0db0 is in state STARTED
2025-07-04 18:25:11.776960 | orchestrator | 2025-07-04 18:25:11 | INFO  | Task 174a19b7-c43b-4ce2-aed5-137b4ef219c3 is in state STARTED
2025-07-04 18:25:11.777777 | orchestrator | 2025-07-04 18:25:11 | INFO  | Task 113ea2de-0d8f-4652-bd54-17e6ffe199be is in state STARTED
2025-07-04 18:25:11.777799 | orchestrator | 2025-07-04 18:25:11 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:25:14.810521 | orchestrator | 2025-07-04 18:25:14 | INFO  | Task ff5b3e53-7431-45f2-a9cd-7d462a596cf0 is in state STARTED
2025-07-04 18:25:14.811819 | orchestrator | 2025-07-04 18:25:14 | INFO  | Task 1c1af083-d8d2-4b30-8bdc-e75fc75e0db0 is in state STARTED
2025-07-04 18:25:14.814584 | orchestrator | 2025-07-04 18:25:14 | INFO  | Task 174a19b7-c43b-4ce2-aed5-137b4ef219c3 is in state STARTED
2025-07-04 18:25:14.818481 | orchestrator | 2025-07-04 18:25:14 | INFO  | Task 113ea2de-0d8f-4652-bd54-17e6ffe199be is in state STARTED
2025-07-04 18:25:14.818526 | orchestrator | 2025-07-04 18:25:14 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:25:17.853589 | orchestrator |
2025-07-04 18:25:17.853688 | orchestrator |
2025-07-04 18:25:17.853729 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-04 18:25:17.853751 | orchestrator |
2025-07-04 18:25:17.853769 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-04 18:25:17.853787 | orchestrator | Friday 04 July 2025 18:24:08 +0000 (0:00:00.779) 0:00:00.779 ***********
2025-07-04 18:25:17.853805 | orchestrator | ok: [testbed-manager]
2025-07-04 18:25:17.853824 | orchestrator | ok: [testbed-node-3]
2025-07-04 18:25:17.853865 | orchestrator | ok: [testbed-node-4]
2025-07-04 18:25:17.853909 | orchestrator | ok: [testbed-node-5]
2025-07-04 18:25:17.853927 | orchestrator | ok: [testbed-node-0]
2025-07-04 18:25:17.853944 | orchestrator | ok: [testbed-node-1]
2025-07-04 18:25:17.853959 | orchestrator | ok: [testbed-node-2]
2025-07-04 18:25:17.854002 | orchestrator |
2025-07-04 18:25:17.854225 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-04 18:25:17.854247 | orchestrator | Friday 04 July 2025 18:24:09 +0000 (0:00:01.394) 0:00:02.174 ***********
2025-07-04 18:25:17.854265 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True)
2025-07-04 18:25:17.854283 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True)
2025-07-04 18:25:17.854301 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True)
2025-07-04 18:25:17.854318 |
orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True)
2025-07-04 18:25:17.854335 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True)
2025-07-04 18:25:17.854351 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True)
2025-07-04 18:25:17.854368 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True)
2025-07-04 18:25:17.854385 | orchestrator |
2025-07-04 18:25:17.854403 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2025-07-04 18:25:17.854419 | orchestrator |
2025-07-04 18:25:17.854436 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************
2025-07-04 18:25:17.854451 | orchestrator | Friday 04 July 2025 18:24:11 +0000 (0:00:01.673) 0:00:03.847 ***********
2025-07-04 18:25:17.854470 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-07-04 18:25:17.854487 | orchestrator |
2025-07-04 18:25:17.854505 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] **********************
2025-07-04 18:25:17.854522 | orchestrator | Friday 04 July 2025 18:24:13 +0000 (0:00:02.069) 0:00:05.917 ***********
2025-07-04 18:25:17.854539 | orchestrator | changed: [testbed-manager] => (item=swift (object-store))
2025-07-04 18:25:17.854581 | orchestrator |
2025-07-04 18:25:17.854599 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] *********************
2025-07-04 18:25:17.854615 | orchestrator | Friday 04 July 2025 18:24:17 +0000 (0:00:04.245) 0:00:10.162 ***********
2025-07-04 18:25:17.854633 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal)
2025-07-04 18:25:17.854651 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public)
2025-07-04 18:25:17.854669 | orchestrator |
2025-07-04 18:25:17.854685 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] **********************
2025-07-04 18:25:17.854702 | orchestrator | Friday 04 July 2025 18:24:24 +0000 (0:00:06.590) 0:00:16.752 ***********
2025-07-04 18:25:17.854718 | orchestrator | ok: [testbed-manager] => (item=service)
2025-07-04 18:25:17.854735 | orchestrator |
2025-07-04 18:25:17.854752 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] *************************
2025-07-04 18:25:17.854767 | orchestrator | Friday 04 July 2025 18:24:27 +0000 (0:00:03.589) 0:00:20.342 ***********
2025-07-04 18:25:17.854783 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-07-04 18:25:17.854799 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service)
2025-07-04 18:25:17.854816 | orchestrator |
2025-07-04 18:25:17.854855 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] *************************
2025-07-04 18:25:17.854874 | orchestrator | Friday 04 July 2025 18:24:31 +0000 (0:00:04.105) 0:00:24.447 ***********
2025-07-04 18:25:17.854890 | orchestrator | ok: [testbed-manager] => (item=admin)
2025-07-04 18:25:17.854907 | orchestrator | changed: [testbed-manager] => (item=ResellerAdmin)
2025-07-04 18:25:17.854923 | orchestrator |
2025-07-04 18:25:17.854939 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ********************
2025-07-04 18:25:17.854955 | orchestrator | Friday 04 July 2025 18:24:38 +0000 (0:00:07.030) 0:00:31.478 ***********
2025-07-04 18:25:17.854972 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service -> admin)
2025-07-04 18:25:17.854989 | orchestrator |
2025-07-04 18:25:17.855006 | orchestrator | PLAY RECAP *********************************************************************
2025-07-04 18:25:17.855023 | orchestrator | testbed-manager : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-04 18:25:17.855041 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-04 18:25:17.855057 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-04 18:25:17.855073 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-04 18:25:17.855091 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-04 18:25:17.855131 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-04 18:25:17.855159 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-04 18:25:17.855176 | orchestrator |
2025-07-04 18:25:17.855193 | orchestrator |
2025-07-04 18:25:17.855209 | orchestrator | TASKS RECAP ********************************************************************
2025-07-04 18:25:17.855226 | orchestrator | Friday 04 July 2025 18:24:44 +0000 (0:00:05.914) 0:00:37.393 ***********
2025-07-04 18:25:17.855243 | orchestrator | ===============================================================================
2025-07-04 18:25:17.855259 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 7.03s
2025-07-04 18:25:17.855277 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.59s
2025-07-04 18:25:17.855308 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 5.91s
2025-07-04 18:25:17.855325 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 4.25s
2025-07-04 18:25:17.855342 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 4.11s
2025-07-04 18:25:17.855359 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.59s
2025-07-04 18:25:17.855375 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 2.07s
2025-07-04 18:25:17.855392 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.67s
2025-07-04 18:25:17.855409 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.39s
2025-07-04 18:25:17.855425 | orchestrator |
2025-07-04 18:25:17.855441 | orchestrator |
2025-07-04 18:25:17.855457 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-04 18:25:17.855474 | orchestrator |
2025-07-04 18:25:17.855489 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-04 18:25:17.855533 | orchestrator | Friday 04 July 2025 18:23:14 +0000 (0:00:00.248) 0:00:00.248 ***********
2025-07-04 18:25:17.855552 | orchestrator | ok: [testbed-node-0]
2025-07-04 18:25:17.855569 | orchestrator | ok: [testbed-node-1]
2025-07-04 18:25:17.855585 | orchestrator | ok: [testbed-node-2]
2025-07-04 18:25:17.855601 | orchestrator |
2025-07-04 18:25:17.855618 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-04 18:25:17.855635 | orchestrator | Friday 04 July 2025 18:23:14 +0000 (0:00:00.336) 0:00:00.584 ***********
2025-07-04 18:25:17.855650 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True)
2025-07-04 18:25:17.855668 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True)
2025-07-04 18:25:17.855684 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True)
2025-07-04 18:25:17.855700 | orchestrator |
2025-07-04 18:25:17.855716 | orchestrator | PLAY [Apply role magnum] *******************************************************
2025-07-04 18:25:17.855733 | orchestrator |
2025-07-04 18:25:17.855749 | orchestrator | TASK [magnum :
include_tasks] **************************************************
2025-07-04 18:25:17.855765 | orchestrator | Friday 04 July 2025 18:23:15 +0000 (0:00:01.165) 0:00:01.750 ***********
2025-07-04 18:25:17.855775 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-04 18:25:17.855785 | orchestrator |
2025-07-04 18:25:17.855794 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************
2025-07-04 18:25:17.855804 | orchestrator | Friday 04 July 2025 18:23:17 +0000 (0:00:01.490) 0:00:03.240 ***********
2025-07-04 18:25:17.855813 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra))
2025-07-04 18:25:17.855823 | orchestrator |
2025-07-04 18:25:17.855893 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] ***********************
2025-07-04 18:25:17.855905 | orchestrator | Friday 04 July 2025 18:23:20 +0000 (0:00:03.726) 0:00:06.967 ***********
2025-07-04 18:25:17.855914 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal)
2025-07-04 18:25:17.855924 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public)
2025-07-04 18:25:17.855933 | orchestrator |
2025-07-04 18:25:17.855943 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************
2025-07-04 18:25:17.855952 | orchestrator | Friday 04 July 2025 18:23:27 +0000 (0:00:06.390) 0:00:13.357 ***********
2025-07-04 18:25:17.855962 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-07-04 18:25:17.855971 | orchestrator |
2025-07-04 18:25:17.855981 | orchestrator | TASK [service-ks-register : magnum | Creating users] ***************************
2025-07-04 18:25:17.855990 | orchestrator | Friday 04 July 2025 18:23:30 +0000 (0:00:03.246) 0:00:16.604 ***********
2025-07-04 18:25:17.855999 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-07-04 18:25:17.856009 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service)
2025-07-04 18:25:17.856065 | orchestrator |
2025-07-04 18:25:17.856076 | orchestrator | TASK [service-ks-register : magnum | Creating roles] ***************************
2025-07-04 18:25:17.856086 | orchestrator | Friday 04 July 2025 18:23:34 +0000 (0:00:03.746) 0:00:20.351 ***********
2025-07-04 18:25:17.856095 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-07-04 18:25:17.856106 | orchestrator |
2025-07-04 18:25:17.856115 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] **********************
2025-07-04 18:25:17.856125 | orchestrator | Friday 04 July 2025 18:23:37 +0000 (0:00:03.424) 0:00:23.775 ***********
2025-07-04 18:25:17.856134 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin)
2025-07-04 18:25:17.856144 | orchestrator |
2025-07-04 18:25:17.856154 | orchestrator | TASK [magnum : Creating Magnum trustee domain] *********************************
2025-07-04 18:25:17.856163 | orchestrator | Friday 04 July 2025 18:23:41 +0000 (0:00:04.013) 0:00:27.788 ***********
2025-07-04 18:25:17.856186 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:25:17.856196 | orchestrator |
2025-07-04 18:25:17.856260 | orchestrator | TASK [magnum : Creating Magnum trustee user] ***********************************
2025-07-04 18:25:17.856378 | orchestrator | Friday 04 July 2025 18:23:45 +0000 (0:00:03.359) 0:00:31.148 ***********
2025-07-04 18:25:17.856399 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:25:17.856411 | orchestrator |
2025-07-04 18:25:17.856433 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ******************************
2025-07-04 18:25:17.856447 | orchestrator | Friday 04 July 2025 18:23:49 +0000 (0:00:04.135) 0:00:35.284 ***********
2025-07-04 18:25:17.856460 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:25:17.856473
| orchestrator | 2025-07-04 18:25:17.856486 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2025-07-04 18:25:17.856514 | orchestrator | Friday 04 July 2025 18:23:53 +0000 (0:00:03.823) 0:00:39.107 *********** 2025-07-04 18:25:17.856531 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-04 18:25:17.856549 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 
'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-04 18:25:17.856564 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-04 18:25:17.856589 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-04 18:25:17.856643 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-04 18:25:17.856659 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-04 18:25:17.856673 | orchestrator | 2025-07-04 18:25:17.856688 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2025-07-04 18:25:17.856702 | orchestrator | Friday 04 July 2025 18:23:54 +0000 (0:00:01.403) 0:00:40.511 *********** 2025-07-04 18:25:17.856715 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:25:17.856729 | orchestrator | 2025-07-04 18:25:17.856742 | orchestrator | TASK [magnum : Set magnum policy file] 
***************************************** 2025-07-04 18:25:17.856755 | orchestrator | Friday 04 July 2025 18:23:54 +0000 (0:00:00.119) 0:00:40.630 *********** 2025-07-04 18:25:17.856768 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:25:17.856782 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:25:17.856796 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:25:17.856810 | orchestrator | 2025-07-04 18:25:17.856823 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2025-07-04 18:25:17.856859 | orchestrator | Friday 04 July 2025 18:23:54 +0000 (0:00:00.387) 0:00:41.017 *********** 2025-07-04 18:25:17.856871 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-07-04 18:25:17.856884 | orchestrator | 2025-07-04 18:25:17.856897 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2025-07-04 18:25:17.856915 | orchestrator | Friday 04 July 2025 18:23:55 +0000 (0:00:00.841) 0:00:41.858 *********** 2025-07-04 18:25:17.856927 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-04 18:25:17.856939 
| orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-04 18:25:17.856969 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-04 18:25:17.856985 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': 
{'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-04 18:25:17.856999 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-04 18:25:17.857017 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-04 18:25:17.857025 | orchestrator | 2025-07-04 18:25:17.857033 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2025-07-04 18:25:17.857041 | orchestrator | Friday 04 July 2025 18:23:58 +0000 (0:00:02.496) 0:00:44.355 *********** 2025-07-04 18:25:17.857049 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:25:17.857058 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:25:17.857071 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:25:17.857084 | orchestrator | 2025-07-04 18:25:17.857097 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-07-04 18:25:17.857110 | orchestrator | Friday 04 July 2025 18:23:58 +0000 (0:00:00.314) 0:00:44.670 *********** 2025-07-04 18:25:17.857124 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-04 18:25:17.857137 | orchestrator | 2025-07-04 18:25:17.857148 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2025-07-04 18:25:17.857156 | orchestrator | Friday 04 July 2025 18:23:59 +0000 (0:00:00.806) 0:00:45.476 *********** 2025-07-04 18:25:17.857176 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-04 18:25:17.857185 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-04 18:25:17.857199 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 
'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-04 18:25:17.857208 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-04 18:25:17.857216 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-04 18:25:17.857233 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 
'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-04 18:25:17.857242 | orchestrator | 2025-07-04 18:25:17.857250 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2025-07-04 18:25:17.857257 | orchestrator | Friday 04 July 2025 18:24:01 +0000 (0:00:02.476) 0:00:47.953 *********** 2025-07-04 18:25:17.857266 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-04 18:25:17.857284 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 
'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-04 18:25:17.857293 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:25:17.857301 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-04 18:25:17.857314 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': 
{'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-04 18:25:17.857323 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:25:17.857334 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-04 18:25:17.857347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-04 18:25:17.857355 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:25:17.857363 | orchestrator | 2025-07-04 18:25:17.857371 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2025-07-04 18:25:17.857379 | orchestrator | Friday 04 July 2025 18:24:03 +0000 (0:00:01.133) 0:00:49.087 *********** 2025-07-04 18:25:17.857387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-04 18:25:17.857396 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-04 18:25:17.857404 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:25:17.857421 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-04 18:25:17.857430 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-04 18:25:17.857443 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:25:17.857452 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-04 18:25:17.857460 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-04 18:25:17.857468 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:25:17.857476 | orchestrator | 2025-07-04 18:25:17.857484 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2025-07-04 18:25:17.857492 | orchestrator | Friday 04 July 2025 18:24:04 +0000 (0:00:01.365) 0:00:50.453 *********** 2025-07-04 18:25:17.857500 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-04 18:25:17.857517 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-04 18:25:17.857531 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-04 18:25:17.857539 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-04 18:25:17.857547 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-04 18:25:17.857556 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-04 18:25:17.857564 | orchestrator | 2025-07-04 18:25:17.857575 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2025-07-04 18:25:17.857587 | orchestrator | Friday 04 July 2025 18:24:06 +0000 (0:00:02.598) 0:00:53.052 *********** 2025-07-04 18:25:17.857595 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-04 18:25:17.857608 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-04 18:25:17.857617 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 
'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-04 18:25:17.857625 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-04 18:25:17.857641 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-04 18:25:17.857656 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-04 18:25:17.857664 | orchestrator | 2025-07-04 18:25:17.857672 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2025-07-04 18:25:17.857680 | orchestrator | Friday 04 July 2025 18:24:15 +0000 (0:00:08.652) 0:01:01.704 *********** 2025-07-04 18:25:17.857688 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-04 18:25:17.857696 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-04 18:25:17.857704 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:25:17.857712 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-04 18:25:17.857731 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-04 18:25:17.857744 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:25:17.857752 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9511', 'listen_port': '9511'}}}})  2025-07-04 18:25:17.857761 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-04 18:25:17.857769 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:25:17.857777 | orchestrator | 2025-07-04 18:25:17.857784 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2025-07-04 18:25:17.857792 | orchestrator | Friday 04 July 2025 18:24:16 +0000 (0:00:01.322) 0:01:03.027 *********** 2025-07-04 18:25:17.857800 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-07-04 18:25:17.857823 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-07-04 18:25:17.857869 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-07-04 18:25:17.857886 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-07-04 18:25:17.857900 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-07-04 18:25:17.857913 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-07-04 18:25:17.857927 | orchestrator |
2025-07-04 18:25:17.857941 | orchestrator | TASK [magnum : include_tasks] **************************************************
2025-07-04 18:25:17.857954 | orchestrator | Friday 04 July 2025 18:24:20 +0000 (0:00:03.088) 0:01:06.115 ***********
2025-07-04 18:25:17.857979 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:25:17.857992 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:25:17.858005 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:25:17.858053 | orchestrator |
2025-07-04 18:25:17.858071 | orchestrator | TASK [magnum : Creating Magnum database] ***************************************
2025-07-04 18:25:17.858085 | orchestrator | Friday 04 July 2025 18:24:20 +0000 (0:00:00.396) 0:01:06.512 ***********
2025-07-04 18:25:17.858099 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:25:17.858113 | orchestrator |
2025-07-04 18:25:17.858128 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] **********
2025-07-04 18:25:17.858141 | orchestrator | Friday 04 July 2025 18:24:22 +0000 (0:00:02.258) 0:01:08.770 ***********
2025-07-04 18:25:17.858155 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:25:17.858170 | orchestrator |
2025-07-04 18:25:17.858185 | orchestrator | TASK [magnum : Running Magnum bootstrap container] *****************************
2025-07-04 18:25:17.858209 | orchestrator | Friday 04 July 2025 18:24:25 +0000 (0:00:02.630) 0:01:11.401 ***********
2025-07-04 18:25:17.858222 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:25:17.858259 | orchestrator |
2025-07-04 18:25:17.858296 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2025-07-04 18:25:17.858310 | orchestrator | Friday 04 July 2025 18:24:44 +0000 (0:00:19.025) 0:01:30.426 ***********
2025-07-04 18:25:17.858323 | orchestrator |
2025-07-04 18:25:17.858336 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2025-07-04 18:25:17.858349 | orchestrator | Friday 04 July 2025 18:24:44 +0000 (0:00:00.144) 0:01:30.571 ***********
2025-07-04 18:25:17.858363 | orchestrator |
2025-07-04 18:25:17.858376 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2025-07-04 18:25:17.858389 | orchestrator | Friday 04 July 2025 18:24:44 +0000 (0:00:00.154) 0:01:30.726 ***********
2025-07-04 18:25:17.858403 | orchestrator |
2025-07-04 18:25:17.858416 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************
2025-07-04 18:25:17.858430 | orchestrator | Friday 04 July 2025 18:24:44 +0000 (0:00:00.143) 0:01:30.869 ***********
2025-07-04 18:25:17.858444 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:25:17.858457 | orchestrator | changed: [testbed-node-2]
2025-07-04 18:25:17.858469 | orchestrator | changed: [testbed-node-1]
2025-07-04 18:25:17.858477 | orchestrator |
2025-07-04 18:25:17.858485 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ******************
2025-07-04 18:25:17.858493 | orchestrator | Friday 04 July 2025 18:25:05 +0000 (0:00:20.677) 0:01:51.546 ***********
2025-07-04 18:25:17.858501 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:25:17.858509 | orchestrator | changed: [testbed-node-1]
2025-07-04 18:25:17.858517 | orchestrator | changed: [testbed-node-2]
2025-07-04 18:25:17.858539 | orchestrator |
2025-07-04 18:25:17.858547 | orchestrator | PLAY RECAP *********************************************************************
2025-07-04 18:25:17.858556 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-07-04 18:25:17.858567 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-07-04 18:25:17.858581 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-07-04 18:25:17.858594 | orchestrator |
2025-07-04 18:25:17.858607 | orchestrator |
2025-07-04 18:25:17.858619 | orchestrator | TASKS RECAP ********************************************************************
2025-07-04 18:25:17.858633 | orchestrator | Friday 04 July 2025 18:25:16 +0000 (0:00:10.990) 0:02:02.537 ***********
2025-07-04 18:25:17.858647 | orchestrator | ===============================================================================
2025-07-04 18:25:17.858660 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 20.68s
2025-07-04 18:25:17.858674 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 19.03s
2025-07-04 18:25:17.858697 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 10.99s
2025-07-04 18:25:17.858713 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 8.65s
2025-07-04 18:25:17.858727 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.39s
2025-07-04 18:25:17.858741 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 4.14s
2025-07-04 18:25:17.858756 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 4.01s
2025-07-04 18:25:17.858770 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.82s
2025-07-04 18:25:17.858785 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 3.75s
2025-07-04 18:25:17.858800 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.72s
2025-07-04 18:25:17.858808 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.42s
2025-07-04 18:25:17.858822 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.36s
2025-07-04 18:25:17.858863 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.25s
2025-07-04 18:25:17.858878 | orchestrator | magnum : Check magnum containers ---------------------------------------- 3.09s
2025-07-04 18:25:17.858892 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.63s
2025-07-04 18:25:17.858905 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.60s
2025-07-04 18:25:17.858919 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.50s
2025-07-04 18:25:17.858932 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.48s
2025-07-04 18:25:17.858945 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.26s
2025-07-04 18:25:17.858959 | orchestrator | magnum : include_tasks -------------------------------------------------- 1.50s
2025-07-04 18:25:17.858972 | orchestrator | 2025-07-04 18:25:17 | INFO  | Task ff5b3e53-7431-45f2-a9cd-7d462a596cf0 is in state SUCCESS
2025-07-04 18:25:17.858986 | orchestrator | 2025-07-04 18:25:17 | INFO  | Task 1c1af083-d8d2-4b30-8bdc-e75fc75e0db0 is in state STARTED
2025-07-04 18:25:17.859000 | orchestrator | 2025-07-04 18:25:17 | INFO  | Task 174a19b7-c43b-4ce2-aed5-137b4ef219c3 is in state STARTED
2025-07-04 18:25:17.859012 | orchestrator | 2025-07-04 18:25:17 | INFO  | Task 113ea2de-0d8f-4652-bd54-17e6ffe199be is in state STARTED
2025-07-04 18:25:17.859020 | orchestrator | 2025-07-04 18:25:17 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:25:20.893152 | orchestrator | 2025-07-04 18:25:20 | INFO  | Task 70d760e8-f478-4402-8018-580677b7228b is in state STARTED
2025-07-04 18:25:20.893579 | orchestrator | 2025-07-04 18:25:20 | INFO  | Task 1c1af083-d8d2-4b30-8bdc-e75fc75e0db0 is in state STARTED
2025-07-04 18:25:20.894309 | orchestrator | 2025-07-04 18:25:20 | INFO  | Task 174a19b7-c43b-4ce2-aed5-137b4ef219c3 is in state STARTED
2025-07-04 18:25:20.894948 | orchestrator | 2025-07-04 18:25:20 | INFO  | Task 113ea2de-0d8f-4652-bd54-17e6ffe199be is in state STARTED
2025-07-04 18:25:20.894982 | orchestrator | 2025-07-04 18:25:20 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:25:23.939141 | orchestrator | 2025-07-04 18:25:23 | INFO  | Task 70d760e8-f478-4402-8018-580677b7228b is in state STARTED
2025-07-04 18:25:23.939748 | orchestrator | 2025-07-04 18:25:23 | INFO  | Task 1c1af083-d8d2-4b30-8bdc-e75fc75e0db0 is in state STARTED
2025-07-04 18:25:23.941147 | orchestrator | 2025-07-04 18:25:23 | INFO  | Task 174a19b7-c43b-4ce2-aed5-137b4ef219c3 is in state STARTED
2025-07-04 18:25:23.941913 | orchestrator | 2025-07-04 18:25:23 | INFO  | Task 113ea2de-0d8f-4652-bd54-17e6ffe199be is in state STARTED
2025-07-04 18:25:23.942134 | orchestrator | 2025-07-04 18:25:23 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:25:26.980163 | orchestrator | 2025-07-04 18:25:26 | INFO  | Task 70d760e8-f478-4402-8018-580677b7228b is in state STARTED
2025-07-04 18:25:26.981432 | orchestrator | 2025-07-04 18:25:26 | INFO  | Task 1c1af083-d8d2-4b30-8bdc-e75fc75e0db0 is in state STARTED
2025-07-04 18:25:26.984527 | orchestrator | 2025-07-04 18:25:26 | INFO  | Task 174a19b7-c43b-4ce2-aed5-137b4ef219c3 is in state STARTED
2025-07-04 18:25:26.985360 | orchestrator | 2025-07-04 18:25:26 | INFO  | Task 113ea2de-0d8f-4652-bd54-17e6ffe199be is in state STARTED
2025-07-04 18:25:26.985395 | orchestrator | 2025-07-04 18:25:26 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:25:30.019907 | orchestrator | 2025-07-04 18:25:30 | INFO  | Task 70d760e8-f478-4402-8018-580677b7228b is in state STARTED
2025-07-04 18:25:30.020944 | orchestrator | 2025-07-04 18:25:30 | INFO  | Task 1c1af083-d8d2-4b30-8bdc-e75fc75e0db0 is in state STARTED
2025-07-04 18:25:30.022124 | orchestrator | 2025-07-04 18:25:30 | INFO  | Task 174a19b7-c43b-4ce2-aed5-137b4ef219c3 is in state STARTED
2025-07-04 18:25:30.022653 | orchestrator | 2025-07-04 18:25:30 | INFO  | Task 113ea2de-0d8f-4652-bd54-17e6ffe199be is in state STARTED
2025-07-04 18:25:30.022669 | orchestrator | 2025-07-04 18:25:30 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:25:33.065801 | orchestrator | 2025-07-04 18:25:33 | INFO  | Task 70d760e8-f478-4402-8018-580677b7228b is in state STARTED
2025-07-04 18:25:33.065934 | orchestrator | 2025-07-04 18:25:33 | INFO  | Task 1c1af083-d8d2-4b30-8bdc-e75fc75e0db0 is in state STARTED
2025-07-04 18:25:33.066481 | orchestrator | 2025-07-04 18:25:33 | INFO  | Task 174a19b7-c43b-4ce2-aed5-137b4ef219c3 is in state STARTED
2025-07-04 18:25:33.067764 | orchestrator | 2025-07-04 18:25:33 | INFO  | Task 113ea2de-0d8f-4652-bd54-17e6ffe199be is in state STARTED
2025-07-04 18:25:33.068089 | orchestrator | 2025-07-04 18:25:33 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:25:36.101742 | orchestrator | 2025-07-04 18:25:36 | INFO  | Task 70d760e8-f478-4402-8018-580677b7228b is in state STARTED
2025-07-04 18:25:36.102559 | orchestrator | 2025-07-04 18:25:36 | INFO  | Task 1c1af083-d8d2-4b30-8bdc-e75fc75e0db0 is in state STARTED
2025-07-04 18:25:36.106949 | orchestrator | 2025-07-04 18:25:36 | INFO  | Task 174a19b7-c43b-4ce2-aed5-137b4ef219c3 is in state STARTED
2025-07-04 18:25:36.108222 | orchestrator | 2025-07-04 18:25:36 | INFO  | Task 113ea2de-0d8f-4652-bd54-17e6ffe199be is in state STARTED
2025-07-04 18:25:36.109790 | orchestrator | 2025-07-04 18:25:36 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:25:39.137588 | orchestrator | 2025-07-04 18:25:39 | INFO  | Task 70d760e8-f478-4402-8018-580677b7228b is in state STARTED
2025-07-04 18:25:39.138986 | orchestrator | 2025-07-04 18:25:39 | INFO  | Task 1c1af083-d8d2-4b30-8bdc-e75fc75e0db0 is in state STARTED
2025-07-04 18:25:39.142450 | orchestrator | 2025-07-04 18:25:39 | INFO  | Task 174a19b7-c43b-4ce2-aed5-137b4ef219c3 is in state STARTED
2025-07-04 18:25:39.144225 | orchestrator | 2025-07-04 18:25:39 | INFO  | Task 113ea2de-0d8f-4652-bd54-17e6ffe199be is in state STARTED
2025-07-04 18:25:39.144253 | orchestrator | 2025-07-04 18:25:39 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:25:42.178122 | orchestrator | 2025-07-04 18:25:42 | INFO  | Task 70d760e8-f478-4402-8018-580677b7228b is in state STARTED
2025-07-04 18:25:42.179005 | orchestrator | 2025-07-04 18:25:42 | INFO  | Task 1c1af083-d8d2-4b30-8bdc-e75fc75e0db0 is in state STARTED
2025-07-04 18:25:42.180202 | orchestrator | 2025-07-04 18:25:42 | INFO  | Task 174a19b7-c43b-4ce2-aed5-137b4ef219c3 is in state STARTED
2025-07-04 18:25:42.181020 | orchestrator | 2025-07-04 18:25:42 | INFO  | Task 113ea2de-0d8f-4652-bd54-17e6ffe199be is in state STARTED
2025-07-04 18:25:42.181163 | orchestrator | 2025-07-04 18:25:42 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:25:45.239786 | orchestrator | 2025-07-04 18:25:45 | INFO  | Task 70d760e8-f478-4402-8018-580677b7228b is in state STARTED
2025-07-04 18:25:45.241081 | orchestrator | 2025-07-04 18:25:45 | INFO  | Task 1c1af083-d8d2-4b30-8bdc-e75fc75e0db0 is in state STARTED
2025-07-04 18:25:45.244052 | orchestrator | 2025-07-04 18:25:45 | INFO  | Task 174a19b7-c43b-4ce2-aed5-137b4ef219c3 is in state STARTED
2025-07-04 18:25:45.245770 | orchestrator | 2025-07-04 18:25:45 | INFO  | Task 113ea2de-0d8f-4652-bd54-17e6ffe199be is in state STARTED
2025-07-04 18:25:45.245865 | orchestrator | 2025-07-04 18:25:45 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:25:48.290101 | orchestrator | 2025-07-04 18:25:48 | INFO  | Task 70d760e8-f478-4402-8018-580677b7228b is in state STARTED
2025-07-04 18:25:48.291257 | orchestrator | 2025-07-04 18:25:48 | INFO  | Task 1c1af083-d8d2-4b30-8bdc-e75fc75e0db0 is in state STARTED
2025-07-04 18:25:48.292479 | orchestrator | 2025-07-04 18:25:48 | INFO  | Task 174a19b7-c43b-4ce2-aed5-137b4ef219c3 is in state STARTED
2025-07-04 18:25:48.297636 | orchestrator | 2025-07-04 18:25:48 | INFO  | Task 113ea2de-0d8f-4652-bd54-17e6ffe199be is in state STARTED
2025-07-04 18:25:48.299034 | orchestrator | 2025-07-04 18:25:48 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:25:51.338862 | orchestrator | 2025-07-04 18:25:51 | INFO  | Task 70d760e8-f478-4402-8018-580677b7228b is in state STARTED
2025-07-04 18:25:51.339184 | orchestrator | 2025-07-04 18:25:51 | INFO  | Task 1c1af083-d8d2-4b30-8bdc-e75fc75e0db0 is in state STARTED
2025-07-04 18:25:51.341354 | orchestrator | 2025-07-04 18:25:51 | INFO  | Task 174a19b7-c43b-4ce2-aed5-137b4ef219c3 is in state STARTED
2025-07-04 18:25:51.341609 | orchestrator | 2025-07-04 18:25:51 | INFO  | Task 113ea2de-0d8f-4652-bd54-17e6ffe199be is in state STARTED
2025-07-04 18:25:51.341629 | orchestrator | 2025-07-04 18:25:51 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:25:54.376060 | orchestrator | 2025-07-04 18:25:54 | INFO  | Task 70d760e8-f478-4402-8018-580677b7228b is in state STARTED
2025-07-04 18:25:54.376160 | orchestrator | 2025-07-04 18:25:54 | INFO  | Task 1c1af083-d8d2-4b30-8bdc-e75fc75e0db0 is in state STARTED
2025-07-04 18:25:54.376181 | orchestrator | 2025-07-04 18:25:54 | INFO  | Task 174a19b7-c43b-4ce2-aed5-137b4ef219c3 is in state STARTED
2025-07-04 18:25:54.376212 | orchestrator | 2025-07-04 18:25:54 | INFO  | Task 113ea2de-0d8f-4652-bd54-17e6ffe199be is in state STARTED
2025-07-04 18:25:54.376229 | orchestrator | 2025-07-04 18:25:54 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:25:57.407115 | orchestrator | 2025-07-04 18:25:57 | INFO  | Task 70d760e8-f478-4402-8018-580677b7228b is in state STARTED
2025-07-04 18:25:57.409004 | orchestrator | 2025-07-04 18:25:57 | INFO  | Task 1c1af083-d8d2-4b30-8bdc-e75fc75e0db0 is in state STARTED
2025-07-04 18:25:57.409034 | orchestrator | 2025-07-04 18:25:57 | INFO  | Task 174a19b7-c43b-4ce2-aed5-137b4ef219c3 is in state STARTED
2025-07-04 18:25:57.409045 | orchestrator | 2025-07-04 18:25:57 | INFO  | Task 113ea2de-0d8f-4652-bd54-17e6ffe199be is in state STARTED
2025-07-04 18:25:57.409057 | orchestrator | 2025-07-04 18:25:57 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:26:00.439532 | orchestrator | 2025-07-04 18:26:00 | INFO  | Task 70d760e8-f478-4402-8018-580677b7228b is in state STARTED
2025-07-04 18:26:00.439747 | orchestrator | 2025-07-04 18:26:00 | INFO  | Task 1c1af083-d8d2-4b30-8bdc-e75fc75e0db0 is in state STARTED
2025-07-04 18:26:00.440602 | orchestrator | 2025-07-04 18:26:00 | INFO  | Task 174a19b7-c43b-4ce2-aed5-137b4ef219c3 is in state STARTED
2025-07-04 18:26:00.441457 | orchestrator | 2025-07-04 18:26:00 | INFO  | Task 113ea2de-0d8f-4652-bd54-17e6ffe199be is in state STARTED
2025-07-04 18:26:00.441500 | orchestrator | 2025-07-04 18:26:00 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:26:03.475687 | orchestrator | 2025-07-04 18:26:03 | INFO  | Task 70d760e8-f478-4402-8018-580677b7228b is in state STARTED
2025-07-04 18:26:03.477300 | orchestrator | 2025-07-04 18:26:03 | INFO  | Task 1c1af083-d8d2-4b30-8bdc-e75fc75e0db0 is in state STARTED
2025-07-04 18:26:03.478472 | orchestrator | 2025-07-04 18:26:03 | INFO  | Task 174a19b7-c43b-4ce2-aed5-137b4ef219c3 is in state STARTED
2025-07-04 18:26:03.479878 | orchestrator | 2025-07-04 18:26:03 | INFO  | Task 113ea2de-0d8f-4652-bd54-17e6ffe199be is in state STARTED
2025-07-04 18:26:03.480007 | orchestrator | 2025-07-04 18:26:03 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:26:06.526720 | orchestrator | 2025-07-04 18:26:06 | INFO  | Task 70d760e8-f478-4402-8018-580677b7228b is in state STARTED
2025-07-04 18:26:06.526843 | orchestrator | 2025-07-04 18:26:06 | INFO  | Task 1c1af083-d8d2-4b30-8bdc-e75fc75e0db0 is in state STARTED
2025-07-04 18:26:06.526859 | orchestrator | 2025-07-04 18:26:06 | INFO  | Task 174a19b7-c43b-4ce2-aed5-137b4ef219c3 is in state STARTED
2025-07-04 18:26:06.529688 | orchestrator | 2025-07-04 18:26:06 | INFO  | Task 113ea2de-0d8f-4652-bd54-17e6ffe199be is in state STARTED
2025-07-04 18:26:06.529761 | orchestrator | 2025-07-04 18:26:06 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:26:09.555109 | orchestrator | 2025-07-04 18:26:09 | INFO  | Task 70d760e8-f478-4402-8018-580677b7228b is in state STARTED
2025-07-04 18:26:09.555207 | orchestrator | 2025-07-04 18:26:09 | INFO  | Task 1c1af083-d8d2-4b30-8bdc-e75fc75e0db0 is in state STARTED
2025-07-04 18:26:09.556740 | orchestrator | 2025-07-04 18:26:09 | INFO  | Task 174a19b7-c43b-4ce2-aed5-137b4ef219c3 is in state STARTED
2025-07-04 18:26:09.556832 | orchestrator | 2025-07-04 18:26:09 | INFO  | Task 113ea2de-0d8f-4652-bd54-17e6ffe199be is in state STARTED
2025-07-04 18:26:09.556852 | orchestrator | 2025-07-04 18:26:09 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:26:12.584952 | orchestrator | 2025-07-04 18:26:12 | INFO  | Task 70d760e8-f478-4402-8018-580677b7228b is in state STARTED
2025-07-04 18:26:12.586297 | orchestrator | 2025-07-04 18:26:12 | INFO  | Task 1c1af083-d8d2-4b30-8bdc-e75fc75e0db0 is in state STARTED
2025-07-04 18:26:12.586988 | orchestrator | 2025-07-04 18:26:12 | INFO  | Task 174a19b7-c43b-4ce2-aed5-137b4ef219c3 is in state STARTED
2025-07-04 18:26:12.587678 | orchestrator | 2025-07-04 18:26:12 | INFO  | Task 113ea2de-0d8f-4652-bd54-17e6ffe199be is in state STARTED
2025-07-04 18:26:12.587697 | orchestrator | 2025-07-04 18:26:12 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:26:15.614130 | orchestrator | 2025-07-04 18:26:15 | INFO  | Task 70d760e8-f478-4402-8018-580677b7228b is in state STARTED
2025-07-04 18:26:15.615060 | orchestrator | 2025-07-04 18:26:15 | INFO  | Task 1c1af083-d8d2-4b30-8bdc-e75fc75e0db0 is in state STARTED
2025-07-04 18:26:15.615868 | orchestrator | 2025-07-04 18:26:15 | INFO  | Task 174a19b7-c43b-4ce2-aed5-137b4ef219c3 is in state STARTED
2025-07-04 18:26:15.616966 | orchestrator | 2025-07-04 18:26:15 | INFO  | Task 113ea2de-0d8f-4652-bd54-17e6ffe199be is in state STARTED
2025-07-04 18:26:15.616993 | orchestrator | 2025-07-04 18:26:15 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:26:18.650240 | orchestrator | 2025-07-04 18:26:18 | INFO  | Task 70d760e8-f478-4402-8018-580677b7228b is in state STARTED
2025-07-04 18:26:18.653283 | orchestrator | 2025-07-04 18:26:18 | INFO  | Task 1c1af083-d8d2-4b30-8bdc-e75fc75e0db0 is in state STARTED
2025-07-04 18:26:18.654078 | orchestrator | 2025-07-04 18:26:18 | INFO  | Task 174a19b7-c43b-4ce2-aed5-137b4ef219c3 is in state STARTED
2025-07-04 18:26:18.655102 | orchestrator | 2025-07-04 18:26:18 | INFO  | Task 113ea2de-0d8f-4652-bd54-17e6ffe199be is in state STARTED
2025-07-04 18:26:18.655252 | orchestrator | 2025-07-04 18:26:18 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:26:21.691892 | orchestrator | 2025-07-04 18:26:21 | INFO  | Task 70d760e8-f478-4402-8018-580677b7228b is in state STARTED
2025-07-04 18:26:21.691980 | orchestrator | 2025-07-04 18:26:21 | INFO  | Task 1c1af083-d8d2-4b30-8bdc-e75fc75e0db0 is in state STARTED
2025-07-04 18:26:21.694233 | orchestrator | 2025-07-04 18:26:21 | INFO  | Task 174a19b7-c43b-4ce2-aed5-137b4ef219c3 is in state STARTED
2025-07-04 18:26:21.695982 | orchestrator | 2025-07-04 18:26:21 | INFO  | Task 113ea2de-0d8f-4652-bd54-17e6ffe199be is in state STARTED
2025-07-04 18:26:21.696007 | orchestrator | 2025-07-04 18:26:21 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:26:24.726109 | orchestrator | 2025-07-04 18:26:24 | INFO  | Task 70d760e8-f478-4402-8018-580677b7228b is in state STARTED
2025-07-04 18:26:24.727668 | orchestrator | 2025-07-04 18:26:24 | INFO  | Task 1c1af083-d8d2-4b30-8bdc-e75fc75e0db0 is in state STARTED
2025-07-04 18:26:24.728394 | orchestrator | 2025-07-04 18:26:24 | INFO  | Task 174a19b7-c43b-4ce2-aed5-137b4ef219c3 is in state STARTED
2025-07-04 18:26:24.728913 | orchestrator | 2025-07-04 18:26:24 | INFO  | Task 113ea2de-0d8f-4652-bd54-17e6ffe199be is in state STARTED
2025-07-04 18:26:24.728941 | orchestrator | 2025-07-04 18:26:24 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:26:27.757901 | orchestrator | 2025-07-04 18:26:27 | INFO  | Task 70d760e8-f478-4402-8018-580677b7228b is in state STARTED
2025-07-04 18:26:27.759641 | orchestrator | 2025-07-04 18:26:27 | INFO  | Task 1c1af083-d8d2-4b30-8bdc-e75fc75e0db0 is in state STARTED
2025-07-04 18:26:27.759693 | orchestrator | 2025-07-04 18:26:27 | INFO  | Task 174a19b7-c43b-4ce2-aed5-137b4ef219c3 is in state STARTED
2025-07-04 18:26:27.760651 | orchestrator | 2025-07-04 18:26:27 | INFO  | Task 113ea2de-0d8f-4652-bd54-17e6ffe199be is in state STARTED
2025-07-04 18:26:27.760685 | orchestrator | 2025-07-04 18:26:27 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:26:30.792668 | orchestrator | 2025-07-04 18:26:30 | INFO  | Task 70d760e8-f478-4402-8018-580677b7228b is in state STARTED
2025-07-04 18:26:30.793759 | orchestrator | 2025-07-04 18:26:30 | INFO  | Task 1c1af083-d8d2-4b30-8bdc-e75fc75e0db0 is in state STARTED
2025-07-04 18:26:30.796544 | orchestrator | 2025-07-04 18:26:30 | INFO  | Task 174a19b7-c43b-4ce2-aed5-137b4ef219c3 is in state STARTED
2025-07-04 18:26:30.800380 | orchestrator | 2025-07-04 18:26:30 | INFO  | Task 113ea2de-0d8f-4652-bd54-17e6ffe199be is in state STARTED
2025-07-04 18:26:30.800572 | orchestrator | 2025-07-04 18:26:30 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:26:33.831171 | orchestrator | 2025-07-04 18:26:33 | INFO  | Task 70d760e8-f478-4402-8018-580677b7228b is in state STARTED
2025-07-04 18:26:33.833174 | orchestrator | 2025-07-04 18:26:33 | INFO  | Task 1c1af083-d8d2-4b30-8bdc-e75fc75e0db0 is in state SUCCESS
2025-07-04 18:26:33.834987 | orchestrator |
2025-07-04 18:26:33.835071 | orchestrator |
2025-07-04 18:26:33.835157 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-04 18:26:33.835220 | orchestrator |
2025-07-04 18:26:33.835283 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-04 18:26:33.835320 | orchestrator | Friday 04 July 2025 18:21:02 +0000 (0:00:00.478) 0:00:00.478 ***********
2025-07-04 18:26:33.835332 | orchestrator | ok: [testbed-node-0]
2025-07-04 18:26:33.835345 | orchestrator | ok: [testbed-node-1]
2025-07-04 18:26:33.835356 | orchestrator | ok: [testbed-node-2]
2025-07-04 18:26:33.835367 | orchestrator | ok: [testbed-node-3]
2025-07-04 18:26:33.835377 | orchestrator | ok: [testbed-node-4]
2025-07-04 18:26:33.835388 | orchestrator | ok: [testbed-node-5]
2025-07-04 18:26:33.835399 | orchestrator |
2025-07-04 18:26:33.835410 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-04 18:26:33.835420 | orchestrator | Friday 04 July 2025 18:21:03 +0000 (0:00:00.770) 0:00:01.249 ***********
2025-07-04 18:26:33.835431 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True)
2025-07-04 18:26:33.835443 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True)
2025-07-04 18:26:33.835453 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True)
2025-07-04 18:26:33.835464 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True)
2025-07-04 18:26:33.835475 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True)
2025-07-04 18:26:33.835485 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True)
2025-07-04 18:26:33.835496 | orchestrator |
2025-07-04 18:26:33.835507 | orchestrator | PLAY [Apply role neutron] ******************************************************
2025-07-04 18:26:33.835520 | orchestrator |
2025-07-04 18:26:33.835532 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-07-04 18:26:33.835543 | orchestrator | Friday 04 July 2025 18:21:04 +0000 (0:00:00.698) 0:00:01.947 ***********
2025-07-04 18:26:33.835556 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-04 18:26:33.835569 | orchestrator |
2025-07-04 18:26:33.835582 | orchestrator | TASK [neutron : Get container facts] *******************************************
2025-07-04 18:26:33.835594 | orchestrator | Friday 04 July 2025 18:21:05 +0000 (0:00:01.321) 0:00:03.269 ***********
2025-07-04 18:26:33.835606 | orchestrator | ok: [testbed-node-0]
2025-07-04 18:26:33.835618 | orchestrator | ok: [testbed-node-1]
2025-07-04 18:26:33.835630 | orchestrator | ok: [testbed-node-2]
2025-07-04 18:26:33.835643 | orchestrator | ok: [testbed-node-3]
2025-07-04 18:26:33.835655 | orchestrator | ok: [testbed-node-4]
2025-07-04 18:26:33.835666 | orchestrator | ok: [testbed-node-5]
2025-07-04 18:26:33.835678 | orchestrator |
2025-07-04 18:26:33.835690 | orchestrator | TASK [neutron : Get container volume facts] ************************************
2025-07-04 18:26:33.835703 | orchestrator | Friday 04 July 2025 18:21:07 +0000 (0:00:01.382) 0:00:04.652 ***********
2025-07-04 18:26:33.835714 | orchestrator | ok: [testbed-node-1]
2025-07-04 18:26:33.835727 | orchestrator | ok: [testbed-node-2]
2025-07-04 18:26:33.835738 | orchestrator | ok: [testbed-node-0]
2025-07-04 18:26:33.835750 | orchestrator | ok: [testbed-node-3]
2025-07-04 18:26:33.835762 | orchestrator | ok: [testbed-node-4]
2025-07-04 18:26:33.835774 | orchestrator | ok: [testbed-node-5]
2025-07-04 18:26:33.835818 | orchestrator |
2025-07-04 18:26:33.835832 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************
2025-07-04 18:26:33.835845 | orchestrator | Friday 04 July 2025 18:21:08 +0000 (0:00:01.313) 0:00:05.965 ***********
2025-07-04 18:26:33.835857 | orchestrator | ok: [testbed-node-0] => {
2025-07-04 18:26:33.835869 | orchestrator |     "changed": false,
2025-07-04 18:26:33.835879 | orchestrator |     "msg": "All assertions passed"
2025-07-04 18:26:33.835890 | orchestrator | }
2025-07-04 18:26:33.835913 | orchestrator | ok: [testbed-node-1] => {
2025-07-04 18:26:33.835925 | orchestrator |     "changed": false,
2025-07-04 18:26:33.835935 | orchestrator |     "msg": "All assertions passed"
2025-07-04 18:26:33.835946 | orchestrator | }
2025-07-04 18:26:33.835957 | orchestrator | ok: [testbed-node-2] => {
2025-07-04 18:26:33.835967 | orchestrator |     "changed": false,
2025-07-04 18:26:33.835987 | orchestrator |     "msg": "All assertions passed"
2025-07-04 18:26:33.835998 | orchestrator | }
2025-07-04 18:26:33.836009 | orchestrator | ok: [testbed-node-3] => {
2025-07-04 18:26:33.836020 | orchestrator |     "changed": false,
2025-07-04 18:26:33.836030 | orchestrator |     "msg": "All assertions passed"
2025-07-04 18:26:33.836041 | orchestrator | }
2025-07-04 18:26:33.836052 | orchestrator | ok: [testbed-node-4] => {
2025-07-04 18:26:33.836063 | orchestrator |     "changed": false,
2025-07-04 18:26:33.836073 | orchestrator |     "msg": "All assertions passed"
2025-07-04 18:26:33.836084 | orchestrator | }
2025-07-04 18:26:33.836095 | orchestrator | ok: [testbed-node-5] => {
2025-07-04 18:26:33.836105 | orchestrator |     "changed": false,
2025-07-04 18:26:33.836116 | orchestrator |     "msg": "All assertions passed"
2025-07-04 18:26:33.836127 | orchestrator | }
2025-07-04 18:26:33.836137 | orchestrator |
2025-07-04 18:26:33.836148 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************
2025-07-04 18:26:33.836159 | orchestrator | Friday 04 July 2025 18:21:09 +0000 (0:00:00.901) 0:00:06.866 ***********
2025-07-04 18:26:33.836169 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:26:33.836180 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:26:33.836191 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:26:33.836201 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:26:33.836212 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:26:33.836222 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:26:33.836233 | orchestrator |
2025-07-04 18:26:33.836244 | orchestrator | TASK [service-ks-register : neutron | Creating services] ***********************
2025-07-04 18:26:33.836254 | orchestrator | Friday 04 July 2025 18:21:09 +0000 (0:00:00.630) 0:00:07.497 ***********
2025-07-04 18:26:33.836265 | orchestrator | changed: [testbed-node-0] => (item=neutron (network))
2025-07-04 18:26:33.836276 | orchestrator |
2025-07-04 18:26:33.836287 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] **********************
2025-07-04 18:26:33.836297 | orchestrator | Friday 04 July 2025 18:21:13 +0000 (0:00:03.331) 0:00:10.828 ***********
2025-07-04 18:26:33.836308 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal)
2025-07-04 18:26:33.836320 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public)
2025-07-04 18:26:33.836330 | orchestrator |
2025-07-04 18:26:33.836355 | orchestrator | TASK [service-ks-register : neutron | Creating projects] ***********************
2025-07-04 18:26:33.836366 | orchestrator | Friday 04 July 2025 18:21:19 +0000 (0:00:06.460) 0:00:17.288 ***********
2025-07-04 18:26:33.836392 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-07-04 18:26:33.836403 | orchestrator |
2025-07-04 18:26:33.836414 | orchestrator | TASK [service-ks-register : neutron | Creating users] **************************
2025-07-04 18:26:33.836424 | orchestrator | Friday 04 July 2025 18:21:22 +0000 (0:00:03.222) 0:00:20.511 ***********
2025-07-04 18:26:33.836435 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-07-04 18:26:33.836446 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service)
2025-07-04 18:26:33.836456 | orchestrator |
2025-07-04 18:26:33.836467 | orchestrator | TASK [service-ks-register : neutron | Creating roles] **************************
2025-07-04 18:26:33.836477 | orchestrator | Friday 04 July 2025 18:21:26 +0000 (0:00:03.902) 0:00:24.414 ***********
2025-07-04 18:26:33.836488 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-07-04 18:26:33.836499 | orchestrator |
2025-07-04 18:26:33.836509 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] *********************
2025-07-04 18:26:33.836520 | orchestrator | Friday 04 July 2025 18:21:30 +0000 (0:00:03.695) 0:00:28.109 ***********
2025-07-04 18:26:33.836530 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin)
2025-07-04 18:26:33.836541 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service)
2025-07-04 18:26:33.836551 | orchestrator |
2025-07-04 18:26:33.836562 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-07-04 18:26:33.836572 | orchestrator | Friday 04 July 2025 18:21:38 +0000 (0:00:07.858) 0:00:35.967 ***********
2025-07-04 18:26:33.836592 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:26:33.836603 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:26:33.836613 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:26:33.836624 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:26:33.836634 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:26:33.836645 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:26:33.836655 | orchestrator |
2025-07-04 18:26:33.836666 | orchestrator | TASK [Load and persist kernel modules] *****************************************
2025-07-04 18:26:33.836677 | orchestrator | Friday 04 July 2025 18:21:39 +0000 (0:00:00.688) 0:00:36.656 ***********
2025-07-04 18:26:33.836687 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:26:33.836697 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:26:33.836708 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:26:33.836718 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:26:33.836729 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:26:33.836739 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:26:33.836750 | orchestrator |
2025-07-04 18:26:33.836760 | orchestrator | TASK [neutron : Check IPv6 support] ********************************************
2025-07-04 18:26:33.836771 | orchestrator | Friday 04 July 2025 18:21:42 +0000 (0:00:03.400) 0:00:40.056 ***********
2025-07-04 18:26:33.836797 | orchestrator | ok: [testbed-node-0]
2025-07-04 18:26:33.836808 | orchestrator | ok: [testbed-node-1]
2025-07-04 18:26:33.836818 | orchestrator | ok: [testbed-node-2]
2025-07-04 18:26:33.836829 | orchestrator | ok: [testbed-node-3]
2025-07-04 18:26:33.836839 | orchestrator | ok: [testbed-node-4]
2025-07-04 18:26:33.836850 | orchestrator | ok: [testbed-node-5]
2025-07-04 18:26:33.836860 | orchestrator |
2025-07-04 18:26:33.836871 | orchestrator | TASK [Setting sysctl values] ***************************************************
2025-07-04 18:26:33.836882 | orchestrator | Friday 04 July 2025 18:21:43 +0000 (0:00:01.151) 0:00:41.208 ***********
2025-07-04 18:26:33.836893 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:26:33.836903 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:26:33.836914 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:26:33.836925 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:26:33.836941 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:26:33.836952 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:26:33.836963
| orchestrator | 2025-07-04 18:26:33.836973 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2025-07-04 18:26:33.836984 | orchestrator | Friday 04 July 2025 18:21:46 +0000 (0:00:03.084) 0:00:44.292 *********** 2025-07-04 18:26:33.836998 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-04 18:26:33.837022 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-04 18:26:33.837041 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-04 18:26:33.837053 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-04 18:26:33.837070 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-04 18:26:33.837081 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-04 18:26:33.837092 | orchestrator | 2025-07-04 18:26:33.837103 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2025-07-04 18:26:33.837114 | orchestrator | Friday 04 July 2025 18:21:50 +0000 (0:00:03.546) 0:00:47.839 *********** 2025-07-04 18:26:33.837125 | orchestrator | [WARNING]: Skipped 2025-07-04 18:26:33.837136 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2025-07-04 18:26:33.837153 | 
orchestrator | due to this access issue: 2025-07-04 18:26:33.837165 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2025-07-04 18:26:33.837175 | orchestrator | a directory 2025-07-04 18:26:33.837186 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-07-04 18:26:33.837197 | orchestrator | 2025-07-04 18:26:33.837213 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-07-04 18:26:33.837224 | orchestrator | Friday 04 July 2025 18:21:51 +0000 (0:00:01.187) 0:00:49.026 *********** 2025-07-04 18:26:33.837235 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-04 18:26:33.837247 | orchestrator | 2025-07-04 18:26:33.837257 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2025-07-04 18:26:33.837268 | orchestrator | Friday 04 July 2025 18:21:52 +0000 (0:00:01.310) 0:00:50.336 *********** 2025-07-04 18:26:33.837279 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': 
'9696'}}}}) 2025-07-04 18:26:33.837291 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-04 18:26:33.837307 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-04 18:26:33.837319 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 
'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-04 18:26:33.837343 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-04 18:26:33.837355 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-04 18:26:33.837366 | orchestrator | 2025-07-04 18:26:33.837378 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2025-07-04 18:26:33.837389 | orchestrator | Friday 04 July 2025 18:21:58 +0000 (0:00:05.622) 0:00:55.959 *********** 2025-07-04 18:26:33.837404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-04 18:26:33.837416 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:26:33.837428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-04 18:26:33.837446 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:26:33.837464 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-04 18:26:33.837476 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:26:33.837487 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-04 18:26:33.837498 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:26:33.837509 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-04 18:26:33.837520 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:26:33.837544 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-04 18:26:33.837556 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:26:33.837567 | orchestrator | 2025-07-04 18:26:33.837578 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2025-07-04 18:26:33.837589 | orchestrator | Friday 04 July 2025 18:22:02 +0000 (0:00:03.675) 0:00:59.634 *********** 2025-07-04 18:26:33.837607 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-04 18:26:33.837618 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:26:33.837636 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-04 18:26:33.837647 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:26:33.837659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-04 18:26:33.837670 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:26:33.837681 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-04 18:26:33.837692 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:26:33.837708 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-04 18:26:33.837725 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:26:33.837736 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-04 18:26:33.837748 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:26:33.837758 | orchestrator | 2025-07-04 18:26:33.837769 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2025-07-04 18:26:33.837813 | orchestrator | Friday 04 July 2025 18:22:06 +0000 (0:00:04.532) 0:01:04.167 *********** 2025-07-04 18:26:33.837825 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:26:33.837836 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:26:33.837846 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:26:33.837857 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:26:33.837868 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:26:33.837878 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:26:33.837889 | orchestrator | 2025-07-04 18:26:33.837900 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2025-07-04 18:26:33.837910 | orchestrator | Friday 04 July 2025 18:22:11 +0000 (0:00:04.347) 0:01:08.514 *********** 2025-07-04 18:26:33.837921 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:26:33.837932 | orchestrator | 2025-07-04 18:26:33.837943 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2025-07-04 18:26:33.837953 | orchestrator | Friday 04 July 2025 18:22:11 +0000 (0:00:00.302) 0:01:08.816 *********** 2025-07-04 18:26:33.837964 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:26:33.837975 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:26:33.837985 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:26:33.837996 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:26:33.838007 | orchestrator | skipping: [testbed-node-4] 2025-07-04 
18:26:33.838068 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:26:33.838083 | orchestrator | 2025-07-04 18:26:33.838094 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2025-07-04 18:26:33.838105 | orchestrator | Friday 04 July 2025 18:22:12 +0000 (0:00:01.152) 0:01:09.969 *********** 2025-07-04 18:26:33.838116 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-04 18:26:33.838136 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:26:33.838153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 
'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-04 18:26:33.838165 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:26:33.838176 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-04 18:26:33.838187 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:26:33.839275 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-04 18:26:33.839361 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:26:33.839380 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-04 18:26:33.839393 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:26:33.839426 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-04 18:26:33.839438 | orchestrator | skipping: [testbed-node-5] 
2025-07-04 18:26:33.839450 | orchestrator | 2025-07-04 18:26:33.839461 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2025-07-04 18:26:33.839473 | orchestrator | Friday 04 July 2025 18:22:16 +0000 (0:00:04.302) 0:01:14.272 *********** 2025-07-04 18:26:33.839493 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-04 18:26:33.839522 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': 
'9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-04 18:26:33.839535 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-04 18:26:33.839547 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-04 18:26:33.839580 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-04 18:26:33.839601 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-04 18:26:33.839621 | orchestrator | 2025-07-04 18:26:33.839640 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2025-07-04 18:26:33.839652 | orchestrator | Friday 04 July 2025 18:22:21 +0000 (0:00:05.117) 0:01:19.389 *********** 2025-07-04 18:26:33.839672 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-04 18:26:33.839685 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-04 18:26:33.839704 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': 
True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-04 18:26:33.839720 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-04 18:26:33.839732 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-04 18:26:33.839751 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-04 18:26:33.839763 | orchestrator | 2025-07-04 18:26:33.839774 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2025-07-04 18:26:33.839820 | orchestrator | Friday 04 July 2025 18:22:30 +0000 (0:00:08.189) 0:01:27.578 *********** 2025-07-04 18:26:33.839835 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-04 18:26:33.839855 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:26:33.839868 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-04 18:26:33.839881 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:26:33.839898 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': 
'9696'}}}}) 2025-07-04 18:26:33.839912 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-04 18:26:33.839934 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-04 18:26:33.839953 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-04 18:26:33.839966 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:26:33.839978 | orchestrator | 2025-07-04 18:26:33.839991 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2025-07-04 18:26:33.840004 | orchestrator | Friday 04 July 2025 18:22:32 +0000 (0:00:02.862) 0:01:30.441 *********** 2025-07-04 18:26:33.840017 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:26:33.840029 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:26:33.840041 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:26:33.840053 | orchestrator | changed: [testbed-node-2] 2025-07-04 18:26:33.840065 | orchestrator | changed: [testbed-node-1] 2025-07-04 18:26:33.840077 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:26:33.840089 | orchestrator | 2025-07-04 18:26:33.840101 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2025-07-04 18:26:33.840114 | orchestrator | Friday 04 July 2025 18:22:35 +0000 (0:00:02.496) 0:01:32.938 *********** 2025-07-04 18:26:33.840131 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-04 18:26:33.840144 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:26:33.840158 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-04 18:26:33.840171 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:26:33.840189 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-04 18:26:33.840207 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-04 18:26:33.840219 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-04 18:26:33.840230 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:26:33.840242 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-04 18:26:33.840253 | orchestrator | 2025-07-04 18:26:33.840264 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2025-07-04 18:26:33.840275 | orchestrator | Friday 04 July 2025 18:22:39 +0000 (0:00:03.709) 0:01:36.647 *********** 2025-07-04 18:26:33.840285 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:26:33.840296 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:26:33.840307 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:26:33.840317 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:26:33.840328 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:26:33.840339 | orchestrator | skipping: [testbed-node-5] 2025-07-04 
18:26:33.840349 | orchestrator | 2025-07-04 18:26:33.840360 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2025-07-04 18:26:33.840371 | orchestrator | Friday 04 July 2025 18:22:41 +0000 (0:00:02.404) 0:01:39.051 *********** 2025-07-04 18:26:33.840382 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:26:33.840403 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:26:33.840414 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:26:33.840425 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:26:33.840436 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:26:33.840446 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:26:33.840458 | orchestrator | 2025-07-04 18:26:33.840468 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2025-07-04 18:26:33.840479 | orchestrator | Friday 04 July 2025 18:22:43 +0000 (0:00:02.316) 0:01:41.367 *********** 2025-07-04 18:26:33.840490 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:26:33.840501 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:26:33.840512 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:26:33.840529 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:26:33.840540 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:26:33.840550 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:26:33.840561 | orchestrator | 2025-07-04 18:26:33.840572 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2025-07-04 18:26:33.840583 | orchestrator | Friday 04 July 2025 18:22:47 +0000 (0:00:03.645) 0:01:45.013 *********** 2025-07-04 18:26:33.840594 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:26:33.840605 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:26:33.840616 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:26:33.840627 | orchestrator | skipping: [testbed-node-2] 2025-07-04 
18:26:33.840637 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:26:33.840648 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:26:33.840658 | orchestrator | 2025-07-04 18:26:33.840669 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2025-07-04 18:26:33.840680 | orchestrator | Friday 04 July 2025 18:22:50 +0000 (0:00:02.600) 0:01:47.613 *********** 2025-07-04 18:26:33.840691 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:26:33.840701 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:26:33.840712 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:26:33.840723 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:26:33.840734 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:26:33.840744 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:26:33.840755 | orchestrator | 2025-07-04 18:26:33.840765 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2025-07-04 18:26:33.840776 | orchestrator | Friday 04 July 2025 18:22:52 +0000 (0:00:02.325) 0:01:49.938 *********** 2025-07-04 18:26:33.840825 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:26:33.840846 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:26:33.840857 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:26:33.840868 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:26:33.840879 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:26:33.840890 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:26:33.840901 | orchestrator | 2025-07-04 18:26:33.840989 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2025-07-04 18:26:33.841010 | orchestrator | Friday 04 July 2025 18:22:56 +0000 (0:00:03.868) 0:01:53.807 *********** 2025-07-04 18:26:33.841021 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-07-04 18:26:33.841032 
| orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-07-04 18:26:33.841043 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:26:33.841054 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:26:33.841065 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-07-04 18:26:33.841076 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:26:33.841087 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-07-04 18:26:33.841098 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:26:33.841109 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-07-04 18:26:33.841128 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:26:33.841139 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-07-04 18:26:33.841150 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:26:33.841161 | orchestrator | 2025-07-04 18:26:33.841172 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2025-07-04 18:26:33.841183 | orchestrator | Friday 04 July 2025 18:22:59 +0000 (0:00:03.157) 0:01:56.964 *********** 2025-07-04 18:26:33.841199 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 
'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-07-04 18:26:33.841212 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:26:33.841233 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-07-04 18:26:33.841245 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:26:33.841257 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-07-04 18:26:33.841268 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:26:33.841279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-07-04 18:26:33.841297 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:26:33.841313 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-07-04 18:26:33.841325 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:26:33.841337 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-07-04 18:26:33.841348 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:26:33.841359 | orchestrator |
2025-07-04 18:26:33.841370 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] *********************************
2025-07-04 18:26:33.841381 | orchestrator | Friday 04 July 2025 18:23:02 +0000 (0:00:03.091) 0:02:00.056 ***********
2025-07-04 18:26:33.841399 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-07-04 18:26:33.841412 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-07-04 18:26:33.841430 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:26:33.841441 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:26:33.841457 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-07-04 18:26:33.841469 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-07-04 18:26:33.841481 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:26:33.841492 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:26:33.841503 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-07-04 18:26:33.841515 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:26:33.841534 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-07-04 18:26:33.841547 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:26:33.841558 | orchestrator |
2025-07-04 18:26:33.841569 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] *******************************
2025-07-04 18:26:33.841580 | orchestrator | Friday 04 July 2025 18:23:05 +0000 (0:00:03.050) 0:02:03.106 ***********
2025-07-04 18:26:33.841592 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:26:33.841609 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:26:33.841620 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:26:33.841630 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:26:33.841642 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:26:33.841652 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:26:33.841663 | orchestrator |
2025-07-04 18:26:33.841674 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] *******************
2025-07-04 18:26:33.841695 | orchestrator | Friday 04 July 2025 18:23:09 +0000 (0:00:03.714) 0:02:06.820 ***********
2025-07-04 18:26:33.841706 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:26:33.841717 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:26:33.841728 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:26:33.841739 | orchestrator | changed: [testbed-node-3]
2025-07-04 18:26:33.841750 | orchestrator | changed: [testbed-node-4]
2025-07-04 18:26:33.841762 | orchestrator | changed: [testbed-node-5]
2025-07-04 18:26:33.841773 | orchestrator |
2025-07-04 18:26:33.841807 | orchestrator | TASK [neutron : Copying over neutron_ovn_vpn_agent.ini] ************************
2025-07-04 18:26:33.841820 | orchestrator | Friday 04 July 2025 18:23:14 +0000 (0:00:04.807) 0:02:11.628 ***********
2025-07-04 18:26:33.841831 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:26:33.841842 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:26:33.841853 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:26:33.841864 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:26:33.841875 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:26:33.841886 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:26:33.841897 | orchestrator |
2025-07-04 18:26:33.841909 | orchestrator | TASK [neutron : Copying over metering_agent.ini] *******************************
2025-07-04 18:26:33.841920 | orchestrator | Friday 04 July 2025 18:23:18 +0000 (0:00:04.111) 0:02:15.740 ***********
2025-07-04 18:26:33.841930 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:26:33.841942 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:26:33.841953 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:26:33.841964 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:26:33.841974 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:26:33.841986 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:26:33.841996 | orchestrator |
2025-07-04 18:26:33.842007 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] *************************
2025-07-04 18:26:33.842095 | orchestrator | Friday 04 July 2025 18:23:21 +0000 (0:00:02.992) 0:02:18.732 ***********
2025-07-04 18:26:33.842125 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:26:33.842144 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:26:33.842163 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:26:33.842174 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:26:33.842185 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:26:33.842196 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:26:33.842207 | orchestrator |
2025-07-04 18:26:33.842218 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] **********************************
2025-07-04 18:26:33.842229 | orchestrator | Friday 04 July 2025 18:23:23 +0000 (0:00:02.062) 0:02:20.794 ***********
2025-07-04 18:26:33.842240 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:26:33.842251 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:26:33.842261 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:26:33.842272 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:26:33.842282 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:26:33.842293 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:26:33.842304 | orchestrator |
2025-07-04 18:26:33.842315 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************
2025-07-04 18:26:33.842325 | orchestrator | Friday 04 July 2025 18:23:26 +0000 (0:00:03.195) 0:02:23.990 ***********
2025-07-04 18:26:33.842336 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:26:33.842347 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:26:33.842357 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:26:33.842367 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:26:33.842388 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:26:33.842399 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:26:33.842409 | orchestrator |
2025-07-04 18:26:33.842434 | orchestrator | TASK [neutron : Copying over nsx.ini] ******************************************
2025-07-04 18:26:33.842445 | orchestrator | Friday 04 July 2025 18:23:28 +0000 (0:00:02.435) 0:02:26.426 ***********
2025-07-04 18:26:33.842456 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:26:33.842466 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:26:33.842477 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:26:33.842487 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:26:33.842498 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:26:33.842508 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:26:33.842519 | orchestrator |
2025-07-04 18:26:33.842529 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] **************************
2025-07-04 18:26:33.842540 | orchestrator | Friday 04 July 2025 18:23:31 +0000 (0:00:02.216) 0:02:28.643 ***********
2025-07-04 18:26:33.842551 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:26:33.842572 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:26:33.842583 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:26:33.842594 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:26:33.842605 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:26:33.842616 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:26:33.842627 | orchestrator |
2025-07-04 18:26:33.842638 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ********************************
2025-07-04 18:26:33.842648 | orchestrator | Friday 04 July 2025 18:23:35 +0000 (0:00:04.222) 0:02:32.865 ***********
2025-07-04 18:26:33.842659 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:26:33.842670 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:26:33.842681 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:26:33.842692 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:26:33.842703 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:26:33.842714 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:26:33.842725 | orchestrator |
2025-07-04 18:26:33.842736 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] ****************************
2025-07-04 18:26:33.842747 | orchestrator | Friday 04 July 2025 18:23:37 +0000 (0:00:02.012) 0:02:34.877 ***********
2025-07-04 18:26:33.842758 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-07-04 18:26:33.842770 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:26:33.842801 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-07-04 18:26:33.842814 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:26:33.842825 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-07-04 18:26:33.842836 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:26:33.842847 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-07-04 18:26:33.842858 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:26:33.842869 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-07-04 18:26:33.842880 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:26:33.842891 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-07-04 18:26:33.842903 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:26:33.842914 | orchestrator |
2025-07-04 18:26:33.842925 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ********************************
2025-07-04 18:26:33.842936 | orchestrator | Friday 04 July 2025 18:23:40 +0000 (0:00:02.675) 0:02:37.553 ***********
2025-07-04 18:26:33.842954 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-07-04 18:26:33.842975 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:26:33.842988 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-07-04 18:26:33.842999 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:26:33.843019 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-07-04 18:26:33.843031 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:26:33.843044 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-07-04 18:26:33.843062 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:26:33.843082 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-07-04 18:26:33.843113 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:26:33.843139 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-07-04 18:26:33.843152 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:26:33.843163 | orchestrator |
2025-07-04 18:26:33.843174 | orchestrator | TASK [neutron : Check neutron containers] **************************************
2025-07-04 18:26:33.843185 | orchestrator | Friday 04 July 2025 18:23:42 +0000 (0:00:02.327) 0:02:39.880 ***********
2025-07-04 18:26:33.843196 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-07-04 18:26:33.843215 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-07-04 18:26:33.843229 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-07-04 18:26:33.843256 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-07-04 18:26:33.843268 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-07-04 18:26:33.843280 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-07-04 18:26:33.843291 | orchestrator |
2025-07-04 18:26:33.843303 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-07-04 18:26:33.843319 | orchestrator | Friday 04 July 2025 18:23:46 +0000 (0:00:04.554) 0:02:44.435 ***********
2025-07-04 18:26:33.843331 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:26:33.843341 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:26:33.843352 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:26:33.843363 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:26:33.843374 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:26:33.843384 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:26:33.843395 | orchestrator |
2025-07-04 18:26:33.843406 | orchestrator | TASK [neutron : Creating Neutron database] *************************************
2025-07-04 18:26:33.843417 | orchestrator | Friday 04 July 2025 18:23:47 +0000 (0:00:00.557) 0:02:44.993 ***********
2025-07-04 18:26:33.843428 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:26:33.843439 | orchestrator |
2025-07-04 18:26:33.843449 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ********
2025-07-04 18:26:33.843460 | orchestrator | Friday 04 July 2025 18:23:49 +0000 (0:00:02.058) 0:02:47.051 ***********
2025-07-04 18:26:33.843471 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:26:33.843481 | orchestrator |
2025-07-04 18:26:33.843492 | orchestrator | TASK [neutron : Running Neutron bootstrap container] ***************************
2025-07-04 18:26:33.843503 | orchestrator | Friday 04 July 2025 18:23:51 +0000 (0:00:02.216) 0:02:49.267 ***********
2025-07-04 18:26:33.843521 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:26:33.843532 | orchestrator |
2025-07-04 18:26:33.843542 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-07-04 18:26:33.843554 | orchestrator | Friday 04 July 2025 18:24:32 +0000 (0:00:40.278) 0:03:29.546 ***********
2025-07-04 18:26:33.843564 | orchestrator |
2025-07-04 18:26:33.843576 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-07-04 18:26:33.843586 | orchestrator | Friday 04 July 2025 18:24:32 +0000 (0:00:00.174) 0:03:29.721 ***********
2025-07-04 18:26:33.843597 | orchestrator |
2025-07-04 18:26:33.843608 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-07-04 18:26:33.843618 | orchestrator | Friday 04 July 2025 18:24:32 +0000 (0:00:00.532) 0:03:30.254 ***********
2025-07-04 18:26:33.843629 | orchestrator |
2025-07-04 18:26:33.843640 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-07-04 18:26:33.843651 | orchestrator | Friday 04 July 2025 18:24:32 +0000 (0:00:00.187) 0:03:30.441 ***********
2025-07-04 18:26:33.843661 | orchestrator |
2025-07-04 18:26:33.843672 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-07-04 18:26:33.843683 | orchestrator | Friday 04 July 2025 18:24:33 +0000 (0:00:00.246) 0:03:30.688 ***********
2025-07-04 18:26:33.843694 | orchestrator |
2025-07-04 18:26:33.843704 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-07-04 18:26:33.843715 | orchestrator | Friday 04 July 2025 18:24:33 +0000 (0:00:00.258) 0:03:30.946 ***********
2025-07-04 18:26:33.843726 | orchestrator |
2025-07-04 18:26:33.843737 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] *******************
2025-07-04 18:26:33.843748 | orchestrator | Friday 04 July 2025 18:24:33 +0000 (0:00:00.288) 0:03:31.234 ***********
2025-07-04 18:26:33.843759 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:26:33.843770 | orchestrator | changed: [testbed-node-1]
2025-07-04 18:26:33.843815 | orchestrator | changed: [testbed-node-2]
2025-07-04 18:26:33.843838 | orchestrator |
2025-07-04 18:26:33.843858 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] *******
2025-07-04 18:26:33.843877 | orchestrator | Friday 04 July 2025 18:25:07 +0000 (0:00:33.504) 0:04:04.739 ***********
2025-07-04 18:26:33.843889 | orchestrator | changed: [testbed-node-4]
2025-07-04 18:26:33.843900 | orchestrator | changed: [testbed-node-5]
2025-07-04 18:26:33.843916 | orchestrator | changed: [testbed-node-3]
2025-07-04 18:26:33.843927 | orchestrator |
2025-07-04 18:26:33.843938 | orchestrator | PLAY RECAP *********************************************************************
2025-07-04 18:26:33.843949 | orchestrator | testbed-node-0 : ok=27  changed=16  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-07-04 18:26:33.843961 | orchestrator | testbed-node-1 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2025-07-04 18:26:33.843972 | orchestrator | testbed-node-2 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2025-07-04 18:26:33.843983 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0
2025-07-04 18:26:33.843995 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0
2025-07-04 18:26:33.844005 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0
2025-07-04 18:26:33.844016 | orchestrator |
2025-07-04 18:26:33.844027 | orchestrator |
2025-07-04 18:26:33.844037 | orchestrator | TASKS RECAP ********************************************************************
2025-07-04 18:26:33.844048 | orchestrator | Friday 04 July 2025 18:26:31 +0000 (0:01:23.894) 0:05:28.633 ***********
2025-07-04 18:26:33.844059 | orchestrator | ===============================================================================
2025-07-04 18:26:33.844077 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 83.89s
2025-07-04 18:26:33.844088 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 40.28s
2025-07-04 18:26:33.844099 | orchestrator | neutron : Restart neutron-server container ----------------------------- 33.50s
2025-07-04 18:26:33.844109 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 8.19s
2025-07-04 18:26:33.844128 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 7.86s
2025-07-04 18:26:33.844140 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.46s
2025-07-04 18:26:33.844150 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 5.62s
2025-07-04 18:26:33.844161 | orchestrator | neutron : Copying over config.json files for services ------------------- 5.12s
2025-07-04 18:26:33.844172 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 4.81s
2025-07-04 18:26:33.844183 | orchestrator | neutron : Check neutron containers -------------------------------------- 4.55s
2025-07-04 18:26:33.844194 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 4.53s
2025-07-04 18:26:33.844204 | orchestrator | neutron : Creating TLS backend PEM File --------------------------------- 4.35s
2025-07-04 18:26:33.844215 | orchestrator | neutron : Copying over existing policy file ----------------------------- 4.30s
2025-07-04 18:26:33.844226 | orchestrator | neutron : Copy neutron-l3-agent-wrapper script -------------------------- 4.22s
2025-07-04 18:26:33.844237 | orchestrator | neutron : Copying over neutron_ovn_vpn_agent.ini ------------------------ 4.11s
2025-07-04 18:26:33.844247 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 3.90s
2025-07-04 18:26:33.844258 | orchestrator | neutron : Copying over dhcp_agent.ini ----------------------------------- 3.87s
2025-07-04 18:26:33.844269 | orchestrator | neutron : Copying over metadata_agent.ini ------------------------------- 3.72s
2025-07-04 18:26:33.844280 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 3.71s
2025-07-04 18:26:33.844290 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.70s
2025-07-04 18:26:33.844301 | orchestrator | 2025-07-04 18:26:33 | INFO  | Task 174a19b7-c43b-4ce2-aed5-137b4ef219c3 is in state STARTED
2025-07-04 18:26:33.844312 | orchestrator | 2025-07-04 18:26:33 | INFO  | Task 113ea2de-0d8f-4652-bd54-17e6ffe199be is in state STARTED
2025-07-04 18:26:33.844323 | orchestrator | 2025-07-04 18:26:33 | INFO  | Task 0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED
2025-07-04 18:26:33.844334 | orchestrator | 2025-07-04 18:26:33 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:26:36.880264 | orchestrator | 2025-07-04 18:26:36 | INFO  | Task 70d760e8-f478-4402-8018-580677b7228b is in state STARTED
2025-07-04 18:26:36.880363 | orchestrator | 2025-07-04 18:26:36 | INFO  | Task 174a19b7-c43b-4ce2-aed5-137b4ef219c3 is in state STARTED
2025-07-04 18:26:36.880946 | orchestrator | 2025-07-04 18:26:36 | INFO  | Task 113ea2de-0d8f-4652-bd54-17e6ffe199be is in state STARTED
2025-07-04 18:26:36.881736 | orchestrator | 2025-07-04 18:26:36 | INFO  | Task 0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED
2025-07-04 18:26:36.881763 | orchestrator | 2025-07-04 18:26:36 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:26:39.908077 | orchestrator | 2025-07-04 18:26:39 | INFO  | Task 70d760e8-f478-4402-8018-580677b7228b is in state STARTED
2025-07-04 18:26:39.908278 | orchestrator | 2025-07-04 18:26:39 | INFO  | Task 174a19b7-c43b-4ce2-aed5-137b4ef219c3 is in state STARTED
2025-07-04 18:26:39.909132 | orchestrator | 2025-07-04 18:26:39 | INFO  | Task 113ea2de-0d8f-4652-bd54-17e6ffe199be is in state STARTED
2025-07-04 18:26:39.909598 | orchestrator | 2025-07-04 18:26:39 | INFO  | Task 0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED
2025-07-04 18:26:39.909652 | orchestrator | 2025-07-04 18:26:39 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:26:42.941040 | orchestrator | 2025-07-04 18:26:42 | INFO  | Task 70d760e8-f478-4402-8018-580677b7228b is in state STARTED
2025-07-04 18:26:42.944130 | orchestrator | 2025-07-04 18:26:42 | INFO  | Task 174a19b7-c43b-4ce2-aed5-137b4ef219c3 is in state STARTED
2025-07-04 18:26:42.951550 | orchestrator | 2025-07-04 18:26:42 | INFO  | Task 113ea2de-0d8f-4652-bd54-17e6ffe199be is in state STARTED
2025-07-04 18:26:42.951598 | orchestrator | 2025-07-04 18:26:42 | INFO  | Task 0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED
2025-07-04 18:26:42.951608 | orchestrator | 2025-07-04
18:26:42 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:26:45.999928 | orchestrator | 2025-07-04 18:26:45 | INFO  | Task 70d760e8-f478-4402-8018-580677b7228b is in state STARTED 2025-07-04 18:26:46.002423 | orchestrator | 2025-07-04 18:26:46 | INFO  | Task 174a19b7-c43b-4ce2-aed5-137b4ef219c3 is in state STARTED 2025-07-04 18:26:46.004262 | orchestrator | 2025-07-04 18:26:46 | INFO  | Task 113ea2de-0d8f-4652-bd54-17e6ffe199be is in state STARTED 2025-07-04 18:26:46.004729 | orchestrator | 2025-07-04 18:26:46 | INFO  | Task 0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED 2025-07-04 18:26:46.004755 | orchestrator | 2025-07-04 18:26:46 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:26:49.045803 | orchestrator | 2025-07-04 18:26:49 | INFO  | Task 70d760e8-f478-4402-8018-580677b7228b is in state STARTED 2025-07-04 18:26:49.047485 | orchestrator | 2025-07-04 18:26:49 | INFO  | Task 174a19b7-c43b-4ce2-aed5-137b4ef219c3 is in state STARTED 2025-07-04 18:26:49.049293 | orchestrator | 2025-07-04 18:26:49 | INFO  | Task 113ea2de-0d8f-4652-bd54-17e6ffe199be is in state STARTED 2025-07-04 18:26:49.051562 | orchestrator | 2025-07-04 18:26:49 | INFO  | Task 0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED 2025-07-04 18:26:49.051605 | orchestrator | 2025-07-04 18:26:49 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:26:52.090413 | orchestrator | 2025-07-04 18:26:52 | INFO  | Task 70d760e8-f478-4402-8018-580677b7228b is in state STARTED 2025-07-04 18:26:52.092434 | orchestrator | 2025-07-04 18:26:52 | INFO  | Task 174a19b7-c43b-4ce2-aed5-137b4ef219c3 is in state STARTED 2025-07-04 18:26:52.094879 | orchestrator | 2025-07-04 18:26:52 | INFO  | Task 113ea2de-0d8f-4652-bd54-17e6ffe199be is in state STARTED 2025-07-04 18:26:52.096881 | orchestrator | 2025-07-04 18:26:52 | INFO  | Task 0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED 2025-07-04 18:26:52.096927 | orchestrator | 2025-07-04 18:26:52 | INFO  | Wait 1 
second(s) until the next check 2025-07-04 18:26:55.144535 | orchestrator | 2025-07-04 18:26:55 | INFO  | Task 70d760e8-f478-4402-8018-580677b7228b is in state STARTED 2025-07-04 18:26:55.145813 | orchestrator | 2025-07-04 18:26:55 | INFO  | Task 174a19b7-c43b-4ce2-aed5-137b4ef219c3 is in state STARTED 2025-07-04 18:26:55.147066 | orchestrator | 2025-07-04 18:26:55 | INFO  | Task 113ea2de-0d8f-4652-bd54-17e6ffe199be is in state STARTED 2025-07-04 18:26:55.148912 | orchestrator | 2025-07-04 18:26:55 | INFO  | Task 0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED 2025-07-04 18:26:55.148937 | orchestrator | 2025-07-04 18:26:55 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:26:58.206181 | orchestrator | 2025-07-04 18:26:58 | INFO  | Task 70d760e8-f478-4402-8018-580677b7228b is in state STARTED 2025-07-04 18:26:58.210126 | orchestrator | 2025-07-04 18:26:58 | INFO  | Task 174a19b7-c43b-4ce2-aed5-137b4ef219c3 is in state STARTED 2025-07-04 18:26:58.212399 | orchestrator | 2025-07-04 18:26:58 | INFO  | Task 113ea2de-0d8f-4652-bd54-17e6ffe199be is in state STARTED 2025-07-04 18:26:58.215319 | orchestrator | 2025-07-04 18:26:58 | INFO  | Task 0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED 2025-07-04 18:26:58.215509 | orchestrator | 2025-07-04 18:26:58 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:27:01.271037 | orchestrator | 2025-07-04 18:27:01 | INFO  | Task 70d760e8-f478-4402-8018-580677b7228b is in state STARTED 2025-07-04 18:27:01.271524 | orchestrator | 2025-07-04 18:27:01 | INFO  | Task 174a19b7-c43b-4ce2-aed5-137b4ef219c3 is in state STARTED 2025-07-04 18:27:01.276205 | orchestrator | 2025-07-04 18:27:01 | INFO  | Task 113ea2de-0d8f-4652-bd54-17e6ffe199be is in state STARTED 2025-07-04 18:27:01.280141 | orchestrator | 2025-07-04 18:27:01 | INFO  | Task 0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED 2025-07-04 18:27:01.280181 | orchestrator | 2025-07-04 18:27:01 | INFO  | Wait 1 second(s) until the next check 
2025-07-04 18:27:04.316288 | orchestrator | 2025-07-04 18:27:04 | INFO  | Task 70d760e8-f478-4402-8018-580677b7228b is in state STARTED
2025-07-04 18:27:04.319194 | orchestrator | 2025-07-04 18:27:04 | INFO  | Task 174a19b7-c43b-4ce2-aed5-137b4ef219c3 is in state STARTED
2025-07-04 18:27:04.323021 | orchestrator | 2025-07-04 18:27:04 | INFO  | Task 113ea2de-0d8f-4652-bd54-17e6ffe199be is in state STARTED
2025-07-04 18:27:04.324881 | orchestrator | 2025-07-04 18:27:04 | INFO  | Task 0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED
2025-07-04 18:27:04.325169 | orchestrator | 2025-07-04 18:27:04 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:27:07.378692 | orchestrator | 2025-07-04 18:27:07 | INFO  | Task 70d760e8-f478-4402-8018-580677b7228b is in state STARTED
2025-07-04 18:27:07.381993 | orchestrator | 2025-07-04 18:27:07 | INFO  | Task 174a19b7-c43b-4ce2-aed5-137b4ef219c3 is in state STARTED
2025-07-04 18:27:07.383848 | orchestrator | 2025-07-04 18:27:07 | INFO  | Task 113ea2de-0d8f-4652-bd54-17e6ffe199be is in state STARTED
2025-07-04 18:27:07.384905 | orchestrator | 2025-07-04 18:27:07 | INFO  | Task 0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED
2025-07-04 18:27:07.384936 | orchestrator | 2025-07-04 18:27:07 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:27:10.421452 | orchestrator | 2025-07-04 18:27:10 | INFO  | Task 70d760e8-f478-4402-8018-580677b7228b is in state STARTED
2025-07-04 18:27:10.421869 | orchestrator | 2025-07-04 18:27:10 | INFO  | Task 174a19b7-c43b-4ce2-aed5-137b4ef219c3 is in state STARTED
2025-07-04 18:27:10.422491 | orchestrator | 2025-07-04 18:27:10 | INFO  | Task 113ea2de-0d8f-4652-bd54-17e6ffe199be is in state STARTED
2025-07-04 18:27:10.423427 | orchestrator | 2025-07-04 18:27:10 | INFO  | Task 0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED
2025-07-04 18:27:10.424443 | orchestrator | 2025-07-04 18:27:10 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:27:13.465347 | orchestrator | 2025-07-04 18:27:13 | INFO  | Task 70d760e8-f478-4402-8018-580677b7228b is in state STARTED
2025-07-04 18:27:13.465659 | orchestrator | 2025-07-04 18:27:13 | INFO  | Task 174a19b7-c43b-4ce2-aed5-137b4ef219c3 is in state STARTED
2025-07-04 18:27:13.465688 | orchestrator | 2025-07-04 18:27:13 | INFO  | Task 113ea2de-0d8f-4652-bd54-17e6ffe199be is in state STARTED
2025-07-04 18:27:13.465700 | orchestrator | 2025-07-04 18:27:13 | INFO  | Task 0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED
2025-07-04 18:27:13.465712 | orchestrator | 2025-07-04 18:27:13 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:27:16.511795 | orchestrator | 2025-07-04 18:27:16 | INFO  | Task 70d760e8-f478-4402-8018-580677b7228b is in state STARTED
2025-07-04 18:27:16.512073 | orchestrator | 2025-07-04 18:27:16 | INFO  | Task 174a19b7-c43b-4ce2-aed5-137b4ef219c3 is in state STARTED
2025-07-04 18:27:16.512121 | orchestrator | 2025-07-04 18:27:16 | INFO  | Task 113ea2de-0d8f-4652-bd54-17e6ffe199be is in state STARTED
2025-07-04 18:27:16.512585 | orchestrator | 2025-07-04 18:27:16 | INFO  | Task 0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED
2025-07-04 18:27:16.512618 | orchestrator | 2025-07-04 18:27:16 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:27:19.537696 | orchestrator | 2025-07-04 18:27:19 | INFO  | Task 70d760e8-f478-4402-8018-580677b7228b is in state STARTED
2025-07-04 18:27:19.538009 | orchestrator | 2025-07-04 18:27:19 | INFO  | Task 174a19b7-c43b-4ce2-aed5-137b4ef219c3 is in state STARTED
2025-07-04 18:27:19.538959 | orchestrator | 2025-07-04 18:27:19 | INFO  | Task 113ea2de-0d8f-4652-bd54-17e6ffe199be is in state STARTED
2025-07-04 18:27:19.539832 | orchestrator | 2025-07-04 18:27:19 | INFO  | Task 0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED
2025-07-04 18:27:19.540140 | orchestrator | 2025-07-04 18:27:19 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:27:22.586242 | orchestrator | 2025-07-04 18:27:22 | INFO  | Task 70d760e8-f478-4402-8018-580677b7228b is in state STARTED
2025-07-04 18:27:22.588457 | orchestrator | 2025-07-04 18:27:22 | INFO  | Task 174a19b7-c43b-4ce2-aed5-137b4ef219c3 is in state STARTED
2025-07-04 18:27:22.590569 | orchestrator | 2025-07-04 18:27:22 | INFO  | Task 113ea2de-0d8f-4652-bd54-17e6ffe199be is in state STARTED
2025-07-04 18:27:22.593895 | orchestrator | 2025-07-04 18:27:22 | INFO  | Task 0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED
2025-07-04 18:27:22.593921 | orchestrator | 2025-07-04 18:27:22 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:27:25.633863 | orchestrator | 2025-07-04 18:27:25 | INFO  | Task 70d760e8-f478-4402-8018-580677b7228b is in state STARTED
2025-07-04 18:27:25.633955 | orchestrator | 2025-07-04 18:27:25 | INFO  | Task 174a19b7-c43b-4ce2-aed5-137b4ef219c3 is in state STARTED
2025-07-04 18:27:25.634814 | orchestrator | 2025-07-04 18:27:25 | INFO  | Task 113ea2de-0d8f-4652-bd54-17e6ffe199be is in state STARTED
2025-07-04 18:27:25.635390 | orchestrator | 2025-07-04 18:27:25 | INFO  | Task 0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED
2025-07-04 18:27:25.635797 | orchestrator | 2025-07-04 18:27:25 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:27:28.668851 | orchestrator | 2025-07-04 18:27:28 | INFO  | Task 70d760e8-f478-4402-8018-580677b7228b is in state STARTED
2025-07-04 18:27:28.669752 | orchestrator | 2025-07-04 18:27:28 | INFO  | Task 174a19b7-c43b-4ce2-aed5-137b4ef219c3 is in state STARTED
2025-07-04 18:27:28.674382 | orchestrator | 2025-07-04 18:27:28 | INFO  | Task 113ea2de-0d8f-4652-bd54-17e6ffe199be is in state SUCCESS
2025-07-04 18:27:28.676000 | orchestrator |
2025-07-04 18:27:28.676024 | orchestrator |
2025-07-04 18:27:28.676031 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-04 18:27:28.676211 | orchestrator |
2025-07-04 18:27:28.676217 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-04 18:27:28.676221 | orchestrator | Friday 04 July 2025  18:23:53 +0000 (0:00:00.288)       0:00:00.288 ***********
2025-07-04 18:27:28.676225 | orchestrator | ok: [testbed-manager]
2025-07-04 18:27:28.676230 | orchestrator | ok: [testbed-node-0]
2025-07-04 18:27:28.676234 | orchestrator | ok: [testbed-node-1]
2025-07-04 18:27:28.676253 | orchestrator | ok: [testbed-node-2]
2025-07-04 18:27:28.676257 | orchestrator | ok: [testbed-node-3]
2025-07-04 18:27:28.676260 | orchestrator | ok: [testbed-node-4]
2025-07-04 18:27:28.676264 | orchestrator | ok: [testbed-node-5]
2025-07-04 18:27:28.676268 | orchestrator |
2025-07-04 18:27:28.676282 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-04 18:27:28.676286 | orchestrator | Friday 04 July 2025  18:23:54 +0000 (0:00:01.007)       0:00:01.296 ***********
2025-07-04 18:27:28.676291 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True)
2025-07-04 18:27:28.676295 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True)
2025-07-04 18:27:28.676299 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True)
2025-07-04 18:27:28.676305 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True)
2025-07-04 18:27:28.676327 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True)
2025-07-04 18:27:28.676334 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True)
2025-07-04 18:27:28.676340 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True)
2025-07-04 18:27:28.676346 | orchestrator |
2025-07-04 18:27:28.676352 | orchestrator | PLAY [Apply role prometheus] ***************************************************
2025-07-04 18:27:28.676358 | orchestrator |
2025-07-04 18:27:28.676364 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2025-07-04 18:27:28.676370 | orchestrator | Friday 04 July 2025  18:23:54 +0000 (0:00:00.705)       0:00:02.002 ***********
2025-07-04 18:27:28.676377 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-04 18:27:28.676384 | orchestrator |
2025-07-04 18:27:28.676390 | orchestrator | TASK [prometheus : Ensuring config directories exist] **************************
2025-07-04 18:27:28.676449 | orchestrator | Friday 04 July 2025  18:23:56 +0000 (0:00:01.348)       0:00:03.350 ***********
2025-07-04 18:27:28.676458 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-07-04 18:27:28.676472 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-04 18:27:28.676479 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-04 18:27:28.676486 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-04 18:27:28.676743 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:27:28.676754 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-04 18:27:28.676760 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:27:28.676767 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-04 18:27:28.676774 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-04 18:27:28.676784 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-04 18:27:28.676791 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:27:28.676810 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-04 18:27:28.676821 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:27:28.676827 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-04 18:27:28.676835 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-07-04 18:27:28.676842 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:27:28.676851 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-04 18:27:28.676858 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-04 18:27:28.676880 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-07-04 18:27:28.676886 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-04 18:27:28.676893 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-04 18:27:28.676899 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:27:28.676906 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:27:28.676914 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-07-04 18:27:28.676921 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:27:28.676931 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:27:28.676949 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-07-04 18:27:28.676956 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-04 18:27:28.676962 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:27:28.676968 | orchestrator |
2025-07-04 18:27:28.676975 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2025-07-04 18:27:28.677395 | orchestrator | Friday 04 July 2025  18:23:59 +0000 (0:00:03.131)       0:00:06.482 ***********
2025-07-04 18:27:28.677402 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-04 18:27:28.677406 | orchestrator |
2025-07-04 18:27:28.677410 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] *****
2025-07-04 18:27:28.677414 | orchestrator | Friday 04 July 2025  18:24:00 +0000 (0:00:01.630)       0:00:08.112 ***********
2025-07-04 18:27:28.677418 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-07-04 18:27:28.677427 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-04 18:27:28.677436 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-04 18:27:28.677453 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-04 18:27:28.677458 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-04 18:27:28.677462 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-04 18:27:28.677465 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-04 18:27:28.677469 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-04 18:27:28.677473 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-04 18:27:28.677482 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-04 18:27:28.677486 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-04 18:27:28.677499 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-04 18:27:28.677504 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-04 18:27:28.677508 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-04 18:27:28.677511 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-04 18:27:28.677515 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-04 18:27:28.677524 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-04 18:27:28.677528 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-04 18:27:28.677532 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-07-04 18:27:28.677544 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-07-04 18:27:28.677549 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-07-04 18:27:28.677553 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-07-04 18:27:28.677557 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': 
{'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-04 18:27:28.677566 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-04 18:27:28.677570 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-04 18:27:28.677582 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-04 18:27:28.677586 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-04 18:27:28.677590 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-04 18:27:28.677594 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2025-07-04 18:27:28.677598 | orchestrator | 2025-07-04 18:27:28.677602 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2025-07-04 18:27:28.677606 | orchestrator | Friday 04 July 2025 18:24:06 +0000 (0:00:06.113) 0:00:14.226 *********** 2025-07-04 18:27:28.677610 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-07-04 18:27:28.677620 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-04 18:27:28.677624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-04 18:27:28.677637 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-04 18:27:28.677647 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-04 18:27:28.677694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 
'dimensions': {}}})  2025-07-04 18:27:28.677698 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-04 18:27:28.677702 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-04 18:27:28.677712 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-07-04 18:27:28.677716 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-04 18:27:28.677720 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:27:28.677754 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-04 18:27:28.677760 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-04 18:27:28.677764 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-04 18:27:28.677786 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-04 18:27:28.677805 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-04 18:27:28.677813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 
'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-04 18:27:28.677887 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-04 18:27:28.677893 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-04 18:27:28.677928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-04 18:27:28.677934 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-04 18:27:28.677952 | orchestrator | skipping: [testbed-manager] 2025-07-04 18:27:28.677957 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:27:28.677962 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:27:28.677969 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-04 18:27:28.677980 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-04 18:27:28.677987 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 
'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-07-04 18:27:28.677993 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:27:28.678003 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-04 18:27:28.678009 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-04 18:27:28.678097 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 
'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-07-04 18:27:28.678108 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:27:28.678115 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-04 18:27:28.678122 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-04 18:27:28.678131 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  
2025-07-04 18:27:28.678135 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:27:28.678139 | orchestrator | 2025-07-04 18:27:28.678143 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2025-07-04 18:27:28.678147 | orchestrator | Friday 04 July 2025 18:24:09 +0000 (0:00:02.401) 0:00:16.627 *********** 2025-07-04 18:27:28.678151 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-04 18:27:28.678157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-04 18:27:28.678162 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2025-07-04 18:27:28.678177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-04 18:27:28.678181 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-04 18:27:28.678189 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 
'active_passive': True}}}})  2025-07-04 18:27:28.678200 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-04 18:27:28.678208 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-04 18:27:28.678218 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-07-04 18:27:28.678242 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-04 18:27:28.678249 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:27:28.678256 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-04 18:27:28.678275 | orchestrator | skipping: [testbed-manager] 2025-07-04 18:27:28.678280 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2025-07-04 18:27:28.678285 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-04 18:27:28.678288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-04 18:27:28.678292 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-04 18:27:28.678296 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:27:28.678302 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-04 18:27:28.678306 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-04 18:27:28.678320 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-04 18:27:28.678325 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-04 18:27:28.678331 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-04 18:27:28.678335 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:27:28.678339 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-04 18:27:28.678343 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-04 18:27:28.678351 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-07-04 18:27:28.678355 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:27:28.678359 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-04 18:27:28.678371 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-04 18:27:28.678376 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-07-04 18:27:28.678383 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:27:28.678387 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-04 18:27:28.678390 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-04 18:27:28.678394 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-07-04 18:27:28.678398 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:27:28.678402 | orchestrator | 2025-07-04 18:27:28.678406 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2025-07-04 18:27:28.678410 | orchestrator | Friday 04 July 2025 18:24:12 +0000 (0:00:02.941) 0:00:19.569 *********** 2025-07-04 18:27:28.678416 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-04 18:27:28.678424 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-07-04 18:27:28.678444 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-04 18:27:28.678456 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-04 18:27:28.678463 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-04 18:27:28.678469 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-04 18:27:28.678476 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-04 18:27:28.678482 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-04 18:27:28.678491 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-04 18:27:28.678497 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-04 18:27:28.678519 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-04 18:27:28.678525 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-04 18:27:28.678532 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-04 18:27:28.678538 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-04 18:27:28.678545 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-04 18:27:28.678554 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-04 18:27:28.678561 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-04 18:27:28.678587 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-04 18:27:28.678595 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-07-04 18:27:28.678602 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-07-04 18:27:28.678608 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:27:28.678615 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-07-04 18:27:28.678624 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-04 18:27:28.678630 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-07-04 18:27:28.678655 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:27:28.678662 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-04 18:27:28.678668 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:27:28.678675 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:27:28.678682 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:27:28.678688 | orchestrator |
2025-07-04 18:27:28.678694 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] *******************
2025-07-04 18:27:28.678701 | orchestrator | Friday 04 July 2025 18:24:20 +0000 (0:00:08.048) 0:00:27.618 ***********
2025-07-04 18:27:28.678707 | orchestrator | ok: [testbed-manager -> localhost]
2025-07-04 18:27:28.678713 | orchestrator |
2025-07-04 18:27:28.678719 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] ***********
2025-07-04 18:27:28.678808 | orchestrator | Friday 04 July 2025 18:24:21 +0000 (0:00:00.891) 0:00:28.509 ***********
2025-07-04 18:27:28.678820 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090082, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7131584, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-04 18:27:28.678831 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1090076, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7111583, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-04 18:27:28.678854 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090082, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7131584, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-04 18:27:28.678861 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090082, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7131584, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-04 18:27:28.678868 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090082, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7131584, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-04 18:27:28.678875 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090082, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7131584, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-04 18:27:28.678881 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090082, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7131584, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-04 18:27:28.678890 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1090066, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.701158, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-04 18:27:28.678900 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1090076, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7111583, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-04 18:27:28.678921 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1090076, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7111583, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-04 18:27:28.678928 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090082, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7131584, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-04 18:27:28.678935 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1090076, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7111583, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-04 18:27:28.678941 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1090076, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7111583, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-04 18:27:28.678947 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1090076, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7111583, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-04 18:27:28.678962 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1090067, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7021582, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-04 18:27:28.678969 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1090066, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.701158, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-04 18:27:28.678990 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1090066, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.701158, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-04 18:27:28.679034 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1090066, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.701158, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-04 18:27:28.679042 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1090076, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7111583, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-04 18:27:28.679049 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1090067, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7021582, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-04 18:27:28.679055 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1090074, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7091582, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-04 18:27:28.679069 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1090067, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7021582, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-04 18:27:28.679075 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1090066, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.701158, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-04 18:27:28.679098 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1090066, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.701158, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-04 18:27:28.679104 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1090066, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.701158, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-04 18:27:28.679110 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1090074, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7091582, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-04 18:27:28.679116 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1090074, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7091582, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-04 18:27:28.679123 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1090067, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7021582, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-04 18:27:28.679164 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1090067, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7021582, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-04 18:27:28.679171 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1090069, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7051582, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-04 18:27:28.679194 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1090067, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7021582, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-04 18:27:28.679202 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1090069, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7051582, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-04 18:27:28.679208 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1090069, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7051582, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-04 18:27:28.679214 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1090074, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7091582, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-04 18:27:28.679221 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1090074, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7091582, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-04 18:27:28.679236 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1090073, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7091582, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-04 18:27:28.679243 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1090073, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7091582, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-04 18:27:28.679264 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1090074, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7091582, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-04 18:27:28.679271 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1090067, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7021582, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-04 18:27:28.679277 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1090069, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7051582, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-04 18:27:28.679284 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1090069, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7051582, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-04 18:27:28.679294 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1090073, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7091582, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-04 18:27:28.679302 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1090073, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7091582, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-04 18:27:28.679309 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1090077, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7111583, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-04 18:27:28.679316 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1090069, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7051582, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-04 18:27:28.679338 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1090077, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7111583, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-04 18:27:28.679345 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1090073, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7091582, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-04 18:27:28.679352 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1090077, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7111583, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-04 18:27:28.679361 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1090074, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7091582, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-04 18:27:28.679368 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1090081, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7131584, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-04 18:27:28.679377 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1090073, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7091582, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-04 18:27:28.679384 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1090077, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7111583, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-04 18:27:28.679405 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1090077, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7111583, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-04 18:27:28.679413 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1090081, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7131584, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-04 18:27:28.679419 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1090081, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7131584, 'gr_name': 'root', 'pw_name': 'root',
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-04 18:27:28.679429 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1090091, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7201583, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-04 18:27:28.679435 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1090081, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7131584, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-04 18:27:28.679443 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1090081, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7131584, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False})  2025-07-04 18:27:28.679449 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1090091, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7201583, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-04 18:27:28.679469 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1090077, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7111583, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-04 18:27:28.679475 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1090079, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7121582, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-04 18:27:28.679481 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1090091, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7201583, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-04 18:27:28.679491 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1090091, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7201583, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-04 18:27:28.679498 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1090069, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7051582, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-04 18:27:28.679507 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1090091, 'dev': 
76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7201583, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-04 18:27:28.679513 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1090079, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7121582, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-04 18:27:28.679535 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1090081, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7131584, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-04 18:27:28.679543 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1090079, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7121582, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': 
True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-04 18:27:28.679555 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090068, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7031581, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-04 18:27:28.679561 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1090079, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7121582, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-04 18:27:28.679567 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1090079, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7121582, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-04 
18:27:28.679576 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090068, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7031581, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-04 18:27:28.679583 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1090072, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7081583, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-04 18:27:28.679602 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090068, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7031581, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-04 18:27:28.679609 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': 
False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1090091, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7201583, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-04 18:27:28.679619 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1090073, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7091582, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-04 18:27:28.679626 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090068, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7031581, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-04 18:27:28.679632 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090068, 'dev': 76, 'nlink': 1, 'atime': 
1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7031581, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-04 18:27:28.679641 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1090072, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7081583, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-04 18:27:28.679647 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090065, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.700158, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-04 18:27:28.679656 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1090072, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7081583, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-04 18:27:28.679662 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1090072, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7081583, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-04 18:27:28.679673 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1090079, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7121582, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-04 18:27:28.679679 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090065, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.700158, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-04 
18:27:28.679686 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1090072, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7081583, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-04 18:27:28.679695 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1090075, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7111583, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-04 18:27:28.679701 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090065, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.700158, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-04 18:27:28.679712 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1090077, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7111583, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-04 18:27:28.679722 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090068, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7031581, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-04 18:27:28.679759 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1090075, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7111583, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-04 18:27:28.679766 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090065, 'dev': 76, 
'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.700158, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-04 18:27:28.679772 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1090090, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7201583, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-04 18:27:28.679781 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090065, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.700158, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-04 18:27:28.679788 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1090072, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7081583, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': 
True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-04 18:27:28.679797 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1090090, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7201583, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-04 18:27:28.679808 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1090075, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7111583, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-04 18:27:28.679815 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1090071, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7061582, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-04 
18:27:28.679821 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1090075, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7111583, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-04 18:27:28.679828 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1090081, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7131584, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-04 18:27:28.679836 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1090071, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7061582, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-04 18:27:28.679843 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1090075, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7111583, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-04 18:27:28.679853 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1090090, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7201583, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-04 18:27:28.679863 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090065, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.700158, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-04 18:27:28.679869 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1090083, 'dev': 
76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7141583, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-04 18:27:28.679875 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:27:28.679882 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1090071, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7061582, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-04 18:27:28.679889 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1090083, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7141583, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-04 18:27:28.679898 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1090090, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 
1751650761.7201583, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-04 18:27:28.679905 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:27:28.679911 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1090090, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7201583, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-04 18:27:28.679925 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1090083, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7141583, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-04 18:27:28.679932 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:27:28.679938 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1090075, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7111583, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-04 18:27:28.679945 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1090071, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7061582, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-04 18:27:28.679951 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1090071, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7061582, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-04 18:27:28.679957 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1090090, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7201583, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False})  2025-07-04 18:27:28.679966 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1090091, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7201583, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-04 18:27:28.679973 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1090083, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7141583, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-04 18:27:28.679982 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:27:28.679992 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1090083, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7141583, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-04 18:27:28.679998 | 
orchestrator | skipping: [testbed-node-3] 2025-07-04 18:27:28.680004 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1090071, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7061582, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-04 18:27:28.680011 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1090079, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7121582, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-04 18:27:28.680017 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1090083, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7141583, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-04 18:27:28.680023 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:27:28.680030 | orchestrator 
| changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090068, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7031581, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-04 18:27:28.680040 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1090072, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7081583, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-04 18:27:28.680049 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090065, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.700158, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-04 18:27:28.680058 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1090075, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7111583, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-04 18:27:28.680064 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1090090, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7201583, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-04 18:27:28.680071 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1090071, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7061582, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-04 18:27:28.680077 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1090083, 'dev': 76, 'nlink': 1, 'atime': 
1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.7141583, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-04 18:27:28.680084 | orchestrator | 2025-07-04 18:27:28.680089 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2025-07-04 18:27:28.680095 | orchestrator | Friday 04 July 2025 18:24:54 +0000 (0:00:33.482) 0:01:01.992 *********** 2025-07-04 18:27:28.680101 | orchestrator | ok: [testbed-manager -> localhost] 2025-07-04 18:27:28.680107 | orchestrator | 2025-07-04 18:27:28.680113 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2025-07-04 18:27:28.680119 | orchestrator | Friday 04 July 2025 18:24:55 +0000 (0:00:00.674) 0:01:02.667 *********** 2025-07-04 18:27:28.680125 | orchestrator | [WARNING]: Skipped 2025-07-04 18:27:28.680131 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-07-04 18:27:28.680137 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2025-07-04 18:27:28.680144 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-07-04 18:27:28.680150 | orchestrator | manager/prometheus.yml.d' is not a directory 2025-07-04 18:27:28.680162 | orchestrator | ok: [testbed-manager -> localhost] 2025-07-04 18:27:28.680168 | orchestrator | [WARNING]: Skipped 2025-07-04 18:27:28.680174 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-07-04 18:27:28.680180 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2025-07-04 18:27:28.680186 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-07-04 18:27:28.680192 | orchestrator | node-0/prometheus.yml.d' is not a directory 2025-07-04 
18:27:28.680198 | orchestrator | [WARNING]: Skipped 2025-07-04 18:27:28.680204 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-07-04 18:27:28.680210 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2025-07-04 18:27:28.680216 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-07-04 18:27:28.680222 | orchestrator | node-1/prometheus.yml.d' is not a directory 2025-07-04 18:27:28.680228 | orchestrator | [WARNING]: Skipped 2025-07-04 18:27:28.680234 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-07-04 18:27:28.680240 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2025-07-04 18:27:28.680246 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-07-04 18:27:28.680252 | orchestrator | node-2/prometheus.yml.d' is not a directory 2025-07-04 18:27:28.680257 | orchestrator | [WARNING]: Skipped 2025-07-04 18:27:28.680263 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-07-04 18:27:28.680269 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2025-07-04 18:27:28.680278 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-07-04 18:27:28.680284 | orchestrator | node-3/prometheus.yml.d' is not a directory 2025-07-04 18:27:28.680290 | orchestrator | [WARNING]: Skipped 2025-07-04 18:27:28.680296 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-07-04 18:27:28.680302 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2025-07-04 18:27:28.680308 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-07-04 18:27:28.680314 | orchestrator | node-4/prometheus.yml.d' is not a directory 2025-07-04 18:27:28.680320 | orchestrator | [WARNING]: Skipped 
2025-07-04 18:27:28.680326 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-07-04 18:27:28.680332 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2025-07-04 18:27:28.680338 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-07-04 18:27:28.680344 | orchestrator | node-5/prometheus.yml.d' is not a directory 2025-07-04 18:27:28.680350 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-07-04 18:27:28.680356 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-07-04 18:27:28.680362 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-07-04 18:27:28.680368 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-07-04 18:27:28.680374 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-07-04 18:27:28.680380 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-07-04 18:27:28.680386 | orchestrator | 2025-07-04 18:27:28.680392 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2025-07-04 18:27:28.680398 | orchestrator | Friday 04 July 2025 18:24:56 +0000 (0:00:01.563) 0:01:04.230 *********** 2025-07-04 18:27:28.680404 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-07-04 18:27:28.680410 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-07-04 18:27:28.680416 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:27:28.680422 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-07-04 18:27:28.680432 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-07-04 18:27:28.680438 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:27:28.680444 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:27:28.680450 | orchestrator | skipping: 
[testbed-node-2] 2025-07-04 18:27:28.680456 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-07-04 18:27:28.680462 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:27:28.680468 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-07-04 18:27:28.680473 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:27:28.680479 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2025-07-04 18:27:28.680486 | orchestrator | 2025-07-04 18:27:28.680491 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2025-07-04 18:27:28.680497 | orchestrator | Friday 04 July 2025 18:25:12 +0000 (0:00:15.311) 0:01:19.541 *********** 2025-07-04 18:27:28.680503 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-07-04 18:27:28.680509 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:27:28.680515 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-07-04 18:27:28.680521 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:27:28.680527 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-07-04 18:27:28.680533 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:27:28.680539 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-07-04 18:27:28.680545 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:27:28.680554 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-07-04 18:27:28.680560 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:27:28.680566 | orchestrator | skipping: [testbed-node-3] => 
(item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-07-04 18:27:28.680572 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:27:28.680578 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2025-07-04 18:27:28.680584 | orchestrator | 2025-07-04 18:27:28.680590 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2025-07-04 18:27:28.680596 | orchestrator | Friday 04 July 2025 18:25:15 +0000 (0:00:03.511) 0:01:23.053 *********** 2025-07-04 18:27:28.680602 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-07-04 18:27:28.680609 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:27:28.680615 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-07-04 18:27:28.680621 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:27:28.680627 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-07-04 18:27:28.680633 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:27:28.680642 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2025-07-04 18:27:28.680648 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-07-04 18:27:28.680654 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:27:28.680659 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-07-04 18:27:28.680665 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:27:28.680671 | 
orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-07-04 18:27:28.680680 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:27:28.680686 | orchestrator | 2025-07-04 18:27:28.680692 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2025-07-04 18:27:28.680698 | orchestrator | Friday 04 July 2025 18:25:17 +0000 (0:00:01.777) 0:01:24.830 *********** 2025-07-04 18:27:28.680705 | orchestrator | ok: [testbed-manager -> localhost] 2025-07-04 18:27:28.680712 | orchestrator | 2025-07-04 18:27:28.680718 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2025-07-04 18:27:28.680732 | orchestrator | Friday 04 July 2025 18:25:18 +0000 (0:00:01.301) 0:01:26.132 *********** 2025-07-04 18:27:28.680740 | orchestrator | skipping: [testbed-manager] 2025-07-04 18:27:28.680746 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:27:28.680752 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:27:28.680758 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:27:28.680764 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:27:28.680768 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:27:28.680772 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:27:28.680775 | orchestrator | 2025-07-04 18:27:28.680779 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2025-07-04 18:27:28.680783 | orchestrator | Friday 04 July 2025 18:25:20 +0000 (0:00:01.347) 0:01:27.479 *********** 2025-07-04 18:27:28.680786 | orchestrator | skipping: [testbed-manager] 2025-07-04 18:27:28.680790 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:27:28.680794 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:27:28.680797 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:27:28.680801 | orchestrator | changed: 
[testbed-node-0] 2025-07-04 18:27:28.680805 | orchestrator | changed: [testbed-node-1] 2025-07-04 18:27:28.680808 | orchestrator | changed: [testbed-node-2] 2025-07-04 18:27:28.680812 | orchestrator | 2025-07-04 18:27:28.680815 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2025-07-04 18:27:28.680819 | orchestrator | Friday 04 July 2025 18:25:22 +0000 (0:00:02.622) 0:01:30.101 *********** 2025-07-04 18:27:28.680823 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-07-04 18:27:28.680827 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-07-04 18:27:28.680830 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-07-04 18:27:28.680834 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:27:28.680838 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:27:28.680841 | orchestrator | skipping: [testbed-manager] 2025-07-04 18:27:28.680845 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-07-04 18:27:28.680848 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:27:28.680852 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-07-04 18:27:28.680856 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:27:28.680859 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-07-04 18:27:28.680863 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:27:28.680867 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-07-04 18:27:28.680871 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:27:28.680874 | orchestrator | 2025-07-04 18:27:28.680878 | orchestrator | TASK [prometheus : Copying config file for 
blackbox exporter] ****************** 2025-07-04 18:27:28.680884 | orchestrator | Friday 04 July 2025 18:25:26 +0000 (0:00:03.256) 0:01:33.357 *********** 2025-07-04 18:27:28.680888 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2025-07-04 18:27:28.680892 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-07-04 18:27:28.680899 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:27:28.680903 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-07-04 18:27:28.680907 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:27:28.680910 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-07-04 18:27:28.680914 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:27:28.680918 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-07-04 18:27:28.680921 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:27:28.680925 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-07-04 18:27:28.680929 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:27:28.680933 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-07-04 18:27:28.680936 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:27:28.680940 | orchestrator | 2025-07-04 18:27:28.680944 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2025-07-04 18:27:28.680950 | orchestrator | Friday 04 July 2025 18:25:28 +0000 (0:00:02.652) 0:01:36.010 *********** 2025-07-04 18:27:28.680954 | orchestrator | [WARNING]: 
Skipped
2025-07-04 18:27:28.680958 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path
2025-07-04 18:27:28.680961 | orchestrator | due to this access issue:
2025-07-04 18:27:28.680965 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is
2025-07-04 18:27:28.680969 | orchestrator | not a directory
2025-07-04 18:27:28.680973 | orchestrator | ok: [testbed-manager -> localhost]
2025-07-04 18:27:28.680976 | orchestrator |
2025-07-04 18:27:28.680980 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] ***************
2025-07-04 18:27:28.680984 | orchestrator | Friday 04 July 2025 18:25:30 +0000 (0:00:01.914) 0:01:37.924 ***********
2025-07-04 18:27:28.680987 | orchestrator | skipping: [testbed-manager]
2025-07-04 18:27:28.680991 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:27:28.680995 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:27:28.680998 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:27:28.681002 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:27:28.681006 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:27:28.681009 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:27:28.681013 | orchestrator |
2025-07-04 18:27:28.681017 | orchestrator | TASK [prometheus : Template extra prometheus server config files] **************
2025-07-04 18:27:28.681021 | orchestrator | Friday 04 July 2025 18:25:31 +0000 (0:00:01.090) 0:01:39.015 ***********
2025-07-04 18:27:28.681024 | orchestrator | skipping: [testbed-manager]
2025-07-04 18:27:28.681028 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:27:28.681032 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:27:28.681035 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:27:28.681039 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:27:28.681043 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:27:28.681046 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:27:28.681050 | orchestrator |
2025-07-04 18:27:28.681054 | orchestrator | TASK [prometheus : Check prometheus containers] ********************************
2025-07-04 18:27:28.681058 | orchestrator | Friday 04 July 2025 18:25:32 +0000 (0:00:00.920) 0:01:39.935 ***********
2025-07-04 18:27:28.681064 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-07-04 18:27:28.681075 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-04 18:27:28.681085 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-04 18:27:28.681091 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-04 18:27:28.681101 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-04 18:27:28.681109 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-04 18:27:28.681116 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-04 18:27:28.681120 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-04 18:27:28.681127 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-04 18:27:28.681131 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:27:28.681138 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:27:28.681142 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-04 18:27:28.681148 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-04 18:27:28.681152 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:27:28.681156 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-04 18:27:28.681164 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-07-04 18:27:28.681171 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:27:28.681175 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:27:28.681179 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-07-04 18:27:28.681185 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-07-04 18:27:28.681189 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-07-04 18:27:28.681192 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:27:28.681207 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:27:28.681211 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-04 18:27:28.681217 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-04 18:27:28.681221 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-04 18:27:28.681227 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:27:28.681231 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:27:28.681235 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-04 18:27:28.681242 | orchestrator |
2025-07-04 18:27:28.681246 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] ***
2025-07-04 18:27:28.681250 | orchestrator | Friday 04 July 2025 18:25:37 +0000 (0:00:04.587) 0:01:44.523 ***********
2025-07-04 18:27:28.681254 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2025-07-04 18:27:28.681258 | orchestrator | skipping: [testbed-manager]
2025-07-04 18:27:28.681261 | orchestrator |
2025-07-04 18:27:28.681265 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-07-04 18:27:28.681269 | orchestrator | Friday 04 July 2025 18:25:38 +0000 (0:00:01.403) 0:01:45.926 ***********
2025-07-04 18:27:28.681272 | orchestrator |
2025-07-04 18:27:28.681276 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-07-04 18:27:28.681280 | orchestrator | Friday 04 July 2025 18:25:38 +0000 (0:00:00.210) 0:01:46.136 ***********
2025-07-04 18:27:28.681283 | orchestrator |
2025-07-04 18:27:28.681287 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-07-04 18:27:28.681291 | orchestrator | Friday 04 July 2025 18:25:38 +0000 (0:00:00.065) 0:01:46.202 ***********
2025-07-04 18:27:28.681295 | orchestrator |
2025-07-04 18:27:28.681298 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-07-04 18:27:28.681302 | orchestrator | Friday 04 July 2025 18:25:39 +0000 (0:00:00.061) 0:01:46.264 ***********
2025-07-04 18:27:28.681306 | orchestrator |
2025-07-04 18:27:28.681309 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-07-04 18:27:28.681313 | orchestrator | Friday 04 July 2025 18:25:39 +0000 (0:00:00.061) 0:01:46.325 ***********
2025-07-04 18:27:28.681316 | orchestrator |
2025-07-04 18:27:28.681320 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-07-04 18:27:28.681324 | orchestrator | Friday 04 July 2025 18:25:39 +0000 (0:00:00.065) 0:01:46.391 ***********
2025-07-04 18:27:28.681328 | orchestrator |
2025-07-04 18:27:28.681331 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-07-04 18:27:28.681335 | orchestrator | Friday 04 July 2025 18:25:39 +0000 (0:00:00.076) 0:01:46.467 ***********
2025-07-04 18:27:28.681339 | orchestrator |
2025-07-04 18:27:28.681342 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] *************
2025-07-04 18:27:28.681346 | orchestrator | Friday 04 July 2025 18:25:39 +0000 (0:00:00.112) 0:01:46.580 ***********
2025-07-04 18:27:28.681350 | orchestrator | changed: [testbed-manager]
2025-07-04 18:27:28.681353 | orchestrator |
2025-07-04 18:27:28.681357 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ******
2025-07-04 18:27:28.681362 | orchestrator | Friday 04 July 2025 18:25:55 +0000 (0:00:16.336) 0:02:02.917 ***********
2025-07-04 18:27:28.681366 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:27:28.681370 | orchestrator | changed: [testbed-manager]
2025-07-04 18:27:28.681373 | orchestrator | changed: [testbed-node-4]
2025-07-04 18:27:28.681377 | orchestrator | changed: [testbed-node-2]
2025-07-04 18:27:28.681381 | orchestrator | changed: [testbed-node-1]
2025-07-04 18:27:28.681384 | orchestrator | changed: [testbed-node-3]
2025-07-04 18:27:28.681388 | orchestrator | changed: [testbed-node-5]
2025-07-04 18:27:28.681392 | orchestrator |
2025-07-04 18:27:28.681395 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] ****
2025-07-04 18:27:28.681399 | orchestrator | Friday 04 July 2025 18:26:12 +0000 (0:00:16.954) 0:02:19.871 ***********
2025-07-04 18:27:28.681403 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:27:28.681406 | orchestrator | changed: [testbed-node-2]
2025-07-04 18:27:28.681410 | orchestrator | changed: [testbed-node-1]
2025-07-04 18:27:28.681414 | orchestrator |
2025-07-04 18:27:28.681417 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] ***
2025-07-04 18:27:28.681421 | orchestrator | Friday 04 July 2025 18:26:23 +0000 (0:00:10.822) 0:02:30.693 ***********
2025-07-04 18:27:28.681425 | orchestrator | changed: [testbed-node-1]
2025-07-04 18:27:28.681431 | orchestrator | changed: [testbed-node-2]
2025-07-04 18:27:28.681434 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:27:28.681438 | orchestrator |
2025-07-04 18:27:28.681442 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] ***********
2025-07-04 18:27:28.681446 | orchestrator | Friday 04 July 2025 18:26:35 +0000 (0:00:11.836) 0:02:42.530 ***********
2025-07-04 18:27:28.681449 | orchestrator | changed: [testbed-node-3]
2025-07-04 18:27:28.681453 | orchestrator | changed: [testbed-node-2]
2025-07-04 18:27:28.681457 | orchestrator | changed: [testbed-node-5]
2025-07-04 18:27:28.681460 | orchestrator | changed: [testbed-node-4]
2025-07-04 18:27:28.681464 | orchestrator | changed: [testbed-manager]
2025-07-04 18:27:28.681470 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:27:28.681474 | orchestrator | changed: [testbed-node-1]
2025-07-04 18:27:28.681477 | orchestrator |
2025-07-04 18:27:28.681481 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] *******
2025-07-04 18:27:28.681485 | orchestrator | Friday 04 July 2025 18:26:52 +0000 (0:00:17.656) 0:03:00.187 ***********
2025-07-04 18:27:28.681489 | orchestrator | changed: [testbed-manager]
2025-07-04 18:27:28.681492 | orchestrator |
2025-07-04 18:27:28.681496 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] ***
2025-07-04 18:27:28.681500 | orchestrator | Friday 04 July 2025 18:27:00 +0000 (0:00:07.479) 0:03:07.667 ***********
2025-07-04 18:27:28.681503 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:27:28.681507 | orchestrator | changed: [testbed-node-2]
2025-07-04 18:27:28.681511 | orchestrator | changed: [testbed-node-1]
2025-07-04 18:27:28.681515 | orchestrator |
2025-07-04 18:27:28.681518 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] ***
2025-07-04 18:27:28.681522 | orchestrator | Friday 04 July 2025 18:27:10 +0000 (0:00:09.781) 0:03:17.448 ***********
2025-07-04 18:27:28.681526 | orchestrator | changed: [testbed-manager]
2025-07-04 18:27:28.681529 | orchestrator |
2025-07-04 18:27:28.681533 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] ***
2025-07-04 18:27:28.681537 | orchestrator | Friday 04 July 2025 18:27:15 +0000 (0:00:04.977) 0:03:22.426 ***********
2025-07-04 18:27:28.681540 | orchestrator | changed: [testbed-node-4]
2025-07-04 18:27:28.681544 | orchestrator | changed: [testbed-node-5]
2025-07-04 18:27:28.681548 | orchestrator | changed: [testbed-node-3]
2025-07-04 18:27:28.681551 | orchestrator |
2025-07-04 18:27:28.681555 | orchestrator | PLAY RECAP *********************************************************************
2025-07-04 18:27:28.681559 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-07-04 18:27:28.681563 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-07-04 18:27:28.681566 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-07-04 18:27:28.681570 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-07-04 18:27:28.681574 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-07-04 18:27:28.681578 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-07-04 18:27:28.681581 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-07-04 18:27:28.681585 | orchestrator |
2025-07-04 18:27:28.681589 | orchestrator |
2025-07-04 18:27:28.681593 | orchestrator | TASKS RECAP ********************************************************************
2025-07-04 18:27:28.681596 | orchestrator | Friday 04 July 2025 18:27:28 +0000 (0:00:13.040) 0:03:35.466 ***********
2025-07-04 18:27:28.681602 | orchestrator | ===============================================================================
2025-07-04 18:27:28.681606 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 33.48s
2025-07-04 18:27:28.681610 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 17.66s
2025-07-04 18:27:28.681613 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 16.95s
2025-07-04 18:27:28.681617 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 16.34s
2025-07-04 18:27:28.681623 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 15.31s
2025-07-04 18:27:28.681627 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 13.04s
2025-07-04 18:27:28.681630 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 11.84s
2025-07-04 18:27:28.681634 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 10.82s
2025-07-04 18:27:28.681638 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container -------- 9.78s
2025-07-04 18:27:28.681641 | orchestrator | prometheus : Copying over config.json files ----------------------------- 8.05s
2025-07-04 18:27:28.681645 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 7.48s
2025-07-04 18:27:28.681649 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 6.11s
2025-07-04 18:27:28.681653 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 4.98s
2025-07-04 18:27:28.681656 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.59s
2025-07-04 18:27:28.681660 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 3.51s
2025-07-04 18:27:28.681664 | orchestrator | prometheus : Copying cloud config file for openstack exporter ----------- 3.26s
2025-07-04 18:27:28.681667 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 3.13s
2025-07-04 18:27:28.681671 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 2.94s
2025-07-04 18:27:28.681675 | orchestrator | prometheus : Copying config file for blackbox exporter ------------------ 2.65s
2025-07-04 18:27:28.681679 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.62s
2025-07-04 18:27:28.681684 | orchestrator | 2025-07-04 18:27:28 | INFO  | Task 0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED
2025-07-04 18:27:28.681688 | orchestrator | 2025-07-04 18:27:28 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:27:31.712880 | orchestrator | 2025-07-04 18:27:31 | INFO  | Task f8543fda-3aef-4d02-9540-c3834c9c76cf is in state STARTED
2025-07-04 18:27:31.714851 | orchestrator | 2025-07-04 18:27:31 | INFO  | Task 70d760e8-f478-4402-8018-580677b7228b is in state STARTED
2025-07-04 18:27:31.716995 | orchestrator | 2025-07-04 18:27:31 | INFO  | Task 174a19b7-c43b-4ce2-aed5-137b4ef219c3 is in state STARTED
2025-07-04 18:27:31.720074 | orchestrator | 2025-07-04 18:27:31 | INFO  | Task 0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED
2025-07-04 18:27:31.720647 | orchestrator | 2025-07-04 18:27:31 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:27:34.761879 | orchestrator | 2025-07-04 18:27:34 | INFO  | Task f8543fda-3aef-4d02-9540-c3834c9c76cf is in state STARTED
2025-07-04 18:27:34.763587 | orchestrator | 2025-07-04 18:27:34 | INFO  | Task 70d760e8-f478-4402-8018-580677b7228b is in state STARTED
2025-07-04 18:27:34.765612 | orchestrator | 2025-07-04 18:27:34 | INFO  | Task 174a19b7-c43b-4ce2-aed5-137b4ef219c3 is in state STARTED
2025-07-04 18:27:34.766974 | orchestrator | 2025-07-04 18:27:34 | INFO  | Task 0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED
2025-07-04 18:27:34.767083 | orchestrator | 2025-07-04 18:27:34 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:27:37.811572 | orchestrator | 2025-07-04 18:27:37 | INFO  | Task f8543fda-3aef-4d02-9540-c3834c9c76cf is in state STARTED
2025-07-04 18:27:37.813834 | orchestrator | 2025-07-04 18:27:37 | INFO  | Task 70d760e8-f478-4402-8018-580677b7228b is in state STARTED
2025-07-04 18:27:37.815666 | orchestrator | 2025-07-04 18:27:37 | INFO  | Task 174a19b7-c43b-4ce2-aed5-137b4ef219c3 is in state STARTED
2025-07-04 18:27:37.817203 | orchestrator | 2025-07-04 18:27:37 | INFO  | Task 0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED
2025-07-04 18:27:37.817246 | orchestrator | 2025-07-04 18:27:37 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:27:40.859420 | orchestrator | 2025-07-04 18:27:40 | INFO  | Task f8543fda-3aef-4d02-9540-c3834c9c76cf is in state STARTED
2025-07-04 18:27:40.862166 | orchestrator | 2025-07-04 18:27:40 | INFO  | Task 70d760e8-f478-4402-8018-580677b7228b is in state STARTED
2025-07-04 18:27:40.864197 | orchestrator | 2025-07-04 18:27:40 | INFO  | Task 174a19b7-c43b-4ce2-aed5-137b4ef219c3 is in state STARTED
2025-07-04 18:27:40.865505 | orchestrator | 2025-07-04 18:27:40 | INFO  | Task 0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED
2025-07-04 18:27:40.866315 | orchestrator | 2025-07-04 18:27:40 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:27:43.901935 | orchestrator | 2025-07-04 18:27:43 | INFO  | Task f8543fda-3aef-4d02-9540-c3834c9c76cf is in state STARTED
2025-07-04 18:27:43.902587 | orchestrator | 2025-07-04 18:27:43 | INFO  | Task 7b5f0385-9d53-4371-b7d1-b7d87f496988 is in state STARTED
2025-07-04 18:27:43.904013 | orchestrator | 2025-07-04 18:27:43 | INFO  | Task 70d760e8-f478-4402-8018-580677b7228b is in state STARTED
2025-07-04 18:27:43.905915 | orchestrator | 2025-07-04 18:27:43 | INFO  | Task 174a19b7-c43b-4ce2-aed5-137b4ef219c3 is in state SUCCESS
2025-07-04 18:27:43.907504 | orchestrator |
2025-07-04 18:27:43.907562 | orchestrator |
2025-07-04 18:27:43.907584 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-04 18:27:43.907603 | orchestrator |
2025-07-04 18:27:43.907619 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-04 18:27:43.907637 | orchestrator | Friday 04 July 2025 18:24:53 +0000 (0:00:00.382) 0:00:00.382 ***********
2025-07-04 18:27:43.907654 | orchestrator | ok: [testbed-node-0]
2025-07-04 18:27:43.907671 | orchestrator | ok: [testbed-node-1]
2025-07-04 18:27:43.907689 | orchestrator | ok: [testbed-node-2]
2025-07-04 18:27:43.907727 | orchestrator |
2025-07-04 18:27:43.907748 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-04 18:27:43.907766 | orchestrator | Friday 04 July 2025 18:24:53 +0000 (0:00:00.393) 0:00:00.775 ***********
2025-07-04 18:27:43.907784 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True)
2025-07-04 18:27:43.907802 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True)
2025-07-04 18:27:43.907820 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True)
2025-07-04 18:27:43.907838 | orchestrator |
2025-07-04 18:27:43.907858 | orchestrator | PLAY [Apply role glance] *******************************************************
2025-07-04 18:27:43.907870 | orchestrator |
2025-07-04 18:27:43.907881 | orchestrator | TASK [glance : include_tasks] **************************************************
2025-07-04 18:27:43.907892 | orchestrator | Friday 04 July 2025 18:24:54 +0000 (0:00:00.473) 0:00:01.249 ***********
2025-07-04 18:27:43.907902 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-04 18:27:43.907914 | orchestrator |
2025-07-04 18:27:43.907924 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************
2025-07-04 18:27:43.907935 | orchestrator | Friday 04 July 2025 18:24:54 +0000 (0:00:00.544) 0:00:01.793 ***********
2025-07-04 18:27:43.907946 | orchestrator | changed: [testbed-node-0] => (item=glance (image))
2025-07-04 18:27:43.907956 | orchestrator |
2025-07-04 18:27:43.907967 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] ***********************
2025-07-04 18:27:43.907999 | orchestrator | Friday 04 July 2025 18:24:57 +0000 (0:00:03.042) 0:00:04.835 ***********
2025-07-04 18:27:43.908011 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal)
2025-07-04 18:27:43.908022 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public)
2025-07-04 18:27:43.908033 | orchestrator |
2025-07-04 18:27:43.908043 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************
2025-07-04 18:27:43.908056 | orchestrator | Friday 04 July 2025 18:25:03 +0000 (0:00:05.590) 0:00:10.425 ***********
2025-07-04 18:27:43.908069 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-07-04 18:27:43.908082 | orchestrator |
2025-07-04 18:27:43.908094 | orchestrator | TASK [service-ks-register : glance | Creating users] ***************************
2025-07-04 18:27:43.908106 | orchestrator | Friday 04 July 2025 18:25:06 +0000 (0:00:02.847) 0:00:13.273 ***********
2025-07-04 18:27:43.908118 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-07-04 18:27:43.908130 | orchestrator | changed: [testbed-node-0] => (item=glance -> service)
2025-07-04 18:27:43.908142 | orchestrator |
2025-07-04 18:27:43.908154 | orchestrator | TASK [service-ks-register : glance | Creating roles] ***************************
2025-07-04 18:27:43.908166 | orchestrator | Friday 04 July 2025 18:25:09 +0000 (0:00:03.877) 0:00:17.151 ***********
2025-07-04 18:27:43.908178 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-07-04 18:27:43.908190 | orchestrator |
2025-07-04 18:27:43.908203 | orchestrator | TASK [service-ks-register : glance | Granting user roles] **********************
2025-07-04 18:27:43.908215 | orchestrator | Friday 04 July 2025 18:25:13 +0000 (0:00:03.571) 0:00:20.722 ***********
2025-07-04 18:27:43.908227 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin)
2025-07-04 18:27:43.908239 | orchestrator |
2025-07-04 18:27:43.908251 | orchestrator | TASK [glance : Ensuring config directories exist] ******************************
2025-07-04 18:27:43.908263 | orchestrator | Friday 04 July 2025 18:25:17 +0000 (0:00:04.207) 0:00:24.930 ***********
2025-07-04 18:27:43.908309 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-07-04 18:27:43.908328 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']},
'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-04 18:27:43.908351 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-04 18:27:43.908364 | orchestrator | 2025-07-04 18:27:43.908377 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-07-04 18:27:43.908395 | orchestrator | Friday 04 July 2025 18:25:22 +0000 (0:00:05.099) 0:00:30.030 *********** 2025-07-04 18:27:43.908414 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-04 18:27:43.908425 | orchestrator | 2025-07-04 18:27:43.908436 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2025-07-04 18:27:43.908446 | orchestrator | Friday 04 July 2025 18:25:24 +0000 (0:00:01.216) 0:00:31.246 *********** 2025-07-04 18:27:43.908457 | orchestrator | changed: [testbed-node-1] 2025-07-04 18:27:43.908468 | orchestrator | changed: [testbed-node-2] 2025-07-04 18:27:43.908479 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:27:43.908496 | orchestrator | 2025-07-04 18:27:43.908507 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2025-07-04 18:27:43.908518 | orchestrator | Friday 04 July 2025 18:25:30 +0000 (0:00:06.582) 0:00:37.828 *********** 2025-07-04 18:27:43.908528 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-07-04 18:27:43.908540 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-07-04 18:27:43.908550 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-07-04 18:27:43.908561 | orchestrator | 2025-07-04 18:27:43.908572 | 
orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2025-07-04 18:27:43.908582 | orchestrator | Friday 04 July 2025 18:25:32 +0000 (0:00:01.799) 0:00:39.628 *********** 2025-07-04 18:27:43.908593 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-07-04 18:27:43.908604 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-07-04 18:27:43.908614 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-07-04 18:27:43.908625 | orchestrator | 2025-07-04 18:27:43.908636 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2025-07-04 18:27:43.908647 | orchestrator | Friday 04 July 2025 18:25:33 +0000 (0:00:01.473) 0:00:41.101 *********** 2025-07-04 18:27:43.908657 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:27:43.908668 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:27:43.908679 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:27:43.908689 | orchestrator | 2025-07-04 18:27:43.908700 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2025-07-04 18:27:43.908756 | orchestrator | Friday 04 July 2025 18:25:34 +0000 (0:00:00.967) 0:00:42.068 *********** 2025-07-04 18:27:43.908768 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:27:43.908779 | orchestrator | 2025-07-04 18:27:43.908790 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2025-07-04 18:27:43.908801 | orchestrator | Friday 04 July 2025 18:25:34 +0000 (0:00:00.131) 0:00:42.200 *********** 2025-07-04 18:27:43.908811 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:27:43.908822 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:27:43.908833 | orchestrator | skipping: [testbed-node-2] 
2025-07-04 18:27:43.908844 | orchestrator | 2025-07-04 18:27:43.908854 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-07-04 18:27:43.908865 | orchestrator | Friday 04 July 2025 18:25:35 +0000 (0:00:00.326) 0:00:42.526 *********** 2025-07-04 18:27:43.908876 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-04 18:27:43.908886 | orchestrator | 2025-07-04 18:27:43.908897 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2025-07-04 18:27:43.908908 | orchestrator | Friday 04 July 2025 18:25:35 +0000 (0:00:00.602) 0:00:43.128 *********** 2025-07-04 18:27:43.908932 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 
'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-04 18:27:43.908953 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-04 18:27:43.908966 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 
fall 5', '']}}}}) 2025-07-04 18:27:43.908984 | orchestrator | 2025-07-04 18:27:43.908995 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2025-07-04 18:27:43.909006 | orchestrator | Friday 04 July 2025 18:25:39 +0000 (0:00:04.043) 0:00:47.172 *********** 2025-07-04 18:27:43.909031 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-07-04 18:27:43.909044 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:27:43.909056 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-07-04 18:27:43.909074 | orchestrator | skipping: [testbed-node-2] 2025-07-04 
18:27:43.909106 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-07-04 18:27:43.909119 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:27:43.909129 | orchestrator | 2025-07-04 18:27:43.909140 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2025-07-04 
18:27:43.909151 | orchestrator | Friday 04 July 2025 18:25:42 +0000 (0:00:02.960) 0:00:50.132 *********** 2025-07-04 18:27:43.909163 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-07-04 18:27:43.909175 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:27:43.909198 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-07-04 18:27:43.909216 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:27:43.909228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-07-04 18:27:43.909240 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:27:43.909251 | orchestrator | 2025-07-04 18:27:43.909261 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2025-07-04 18:27:43.909272 | orchestrator | Friday 04 July 2025 18:25:46 +0000 (0:00:03.135) 0:00:53.268 *********** 2025-07-04 18:27:43.909283 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:27:43.909294 | 
orchestrator | skipping: [testbed-node-2] 2025-07-04 18:27:43.909305 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:27:43.909316 | orchestrator | 2025-07-04 18:27:43.909326 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2025-07-04 18:27:43.909343 | orchestrator | Friday 04 July 2025 18:25:49 +0000 (0:00:03.689) 0:00:56.957 *********** 2025-07-04 18:27:43.909367 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 
'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-04 18:27:43.909381 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-04 18:27:43.909393 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-04 18:27:43.909412 | orchestrator | 2025-07-04 18:27:43.909423 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2025-07-04 18:27:43.909433 | orchestrator | Friday 04 July 2025 18:25:53 +0000 (0:00:04.057) 0:01:01.015 *********** 2025-07-04 18:27:43.909444 | 
orchestrator | changed: [testbed-node-0] 2025-07-04 18:27:43.909455 | orchestrator | changed: [testbed-node-1] 2025-07-04 18:27:43.909466 | orchestrator | changed: [testbed-node-2] 2025-07-04 18:27:43.909476 | orchestrator | 2025-07-04 18:27:43.909491 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2025-07-04 18:27:43.909508 | orchestrator | Friday 04 July 2025 18:26:03 +0000 (0:00:09.368) 0:01:10.383 *********** 2025-07-04 18:27:43.909519 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:27:43.909530 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:27:43.909541 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:27:43.909551 | orchestrator | 2025-07-04 18:27:43.909562 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2025-07-04 18:27:43.909573 | orchestrator | Friday 04 July 2025 18:26:07 +0000 (0:00:04.018) 0:01:14.402 *********** 2025-07-04 18:27:43.909583 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:27:43.909594 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:27:43.909605 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:27:43.909615 | orchestrator | 2025-07-04 18:27:43.909626 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2025-07-04 18:27:43.909637 | orchestrator | Friday 04 July 2025 18:26:11 +0000 (0:00:03.978) 0:01:18.380 *********** 2025-07-04 18:27:43.909647 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:27:43.909658 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:27:43.909668 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:27:43.909679 | orchestrator | 2025-07-04 18:27:43.909690 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2025-07-04 18:27:43.909701 | orchestrator | Friday 04 July 2025 18:26:15 +0000 (0:00:04.433) 0:01:22.814 *********** 2025-07-04 18:27:43.909772 | 
orchestrator | skipping: [testbed-node-1] 2025-07-04 18:27:43.909792 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:27:43.909807 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:27:43.909818 | orchestrator | 2025-07-04 18:27:43.909829 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2025-07-04 18:27:43.909839 | orchestrator | Friday 04 July 2025 18:26:18 +0000 (0:00:03.101) 0:01:25.915 *********** 2025-07-04 18:27:43.909850 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:27:43.909860 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:27:43.909869 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:27:43.909879 | orchestrator | 2025-07-04 18:27:43.909888 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2025-07-04 18:27:43.909904 | orchestrator | Friday 04 July 2025 18:26:18 +0000 (0:00:00.309) 0:01:26.225 *********** 2025-07-04 18:27:43.909914 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-07-04 18:27:43.909924 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:27:43.909934 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-07-04 18:27:43.909943 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:27:43.909953 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-07-04 18:27:43.909963 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:27:43.909972 | orchestrator | 2025-07-04 18:27:43.909982 | orchestrator | TASK [glance : Check glance containers] **************************************** 2025-07-04 18:27:43.909991 | orchestrator | Friday 04 July 2025 18:26:22 +0000 (0:00:03.576) 0:01:29.802 *********** 2025-07-04 18:27:43.910002 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': 
{'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-04 18:27:43.910072 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-04 18:27:43.910105 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-04 18:27:43.910124 | orchestrator | 2025-07-04 18:27:43.910140 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-07-04 18:27:43.910156 | orchestrator | Friday 04 July 2025 18:26:27 +0000 (0:00:05.134) 0:01:34.936 *********** 2025-07-04 18:27:43.910206 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:27:43.910227 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:27:43.910237 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:27:43.910246 | orchestrator | 2025-07-04 18:27:43.910256 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2025-07-04 18:27:43.910265 | orchestrator | Friday 04 July 2025 18:26:28 +0000 (0:00:00.315) 0:01:35.252 *********** 2025-07-04 18:27:43.910274 | orchestrator | 
changed: [testbed-node-0] 2025-07-04 18:27:43.910284 | orchestrator | 2025-07-04 18:27:43.910293 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2025-07-04 18:27:43.910302 | orchestrator | Friday 04 July 2025 18:26:30 +0000 (0:00:02.215) 0:01:37.467 *********** 2025-07-04 18:27:43.910312 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:27:43.910321 | orchestrator | 2025-07-04 18:27:43.910330 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2025-07-04 18:27:43.910340 | orchestrator | Friday 04 July 2025 18:26:32 +0000 (0:00:02.134) 0:01:39.602 *********** 2025-07-04 18:27:43.910349 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:27:43.910359 | orchestrator | 2025-07-04 18:27:43.910372 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2025-07-04 18:27:43.910390 | orchestrator | Friday 04 July 2025 18:26:34 +0000 (0:00:02.187) 0:01:41.789 *********** 2025-07-04 18:27:43.910400 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:27:43.910410 | orchestrator | 2025-07-04 18:27:43.910419 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2025-07-04 18:27:43.910428 | orchestrator | Friday 04 July 2025 18:27:08 +0000 (0:00:33.513) 0:02:15.303 *********** 2025-07-04 18:27:43.910446 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:27:43.910455 | orchestrator | 2025-07-04 18:27:43.910465 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-07-04 18:27:43.910474 | orchestrator | Friday 04 July 2025 18:27:10 +0000 (0:00:02.470) 0:02:17.773 *********** 2025-07-04 18:27:43.910483 | orchestrator | 2025-07-04 18:27:43.910493 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-07-04 18:27:43.910502 | orchestrator | Friday 04 July 2025 18:27:10 +0000 (0:00:00.059) 
0:02:17.833 *********** 2025-07-04 18:27:43.910512 | orchestrator | 2025-07-04 18:27:43.910521 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-07-04 18:27:43.910530 | orchestrator | Friday 04 July 2025 18:27:10 +0000 (0:00:00.059) 0:02:17.892 *********** 2025-07-04 18:27:43.910539 | orchestrator | 2025-07-04 18:27:43.910549 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2025-07-04 18:27:43.910558 | orchestrator | Friday 04 July 2025 18:27:10 +0000 (0:00:00.063) 0:02:17.956 *********** 2025-07-04 18:27:43.910567 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:27:43.910577 | orchestrator | changed: [testbed-node-2] 2025-07-04 18:27:43.910586 | orchestrator | changed: [testbed-node-1] 2025-07-04 18:27:43.910596 | orchestrator | 2025-07-04 18:27:43.910605 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-04 18:27:43.910615 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-07-04 18:27:43.910626 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-07-04 18:27:43.910635 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-07-04 18:27:43.910645 | orchestrator | 2025-07-04 18:27:43.910654 | orchestrator | 2025-07-04 18:27:43.910664 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-04 18:27:43.910673 | orchestrator | Friday 04 July 2025 18:27:42 +0000 (0:00:31.875) 0:02:49.832 *********** 2025-07-04 18:27:43.910682 | orchestrator | =============================================================================== 2025-07-04 18:27:43.910692 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 33.51s 2025-07-04 18:27:43.910702 | orchestrator | 
glance : Restart glance-api container ---------------------------------- 31.88s 2025-07-04 18:27:43.910736 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 9.37s 2025-07-04 18:27:43.910746 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 6.58s 2025-07-04 18:27:43.910756 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 5.59s 2025-07-04 18:27:43.910765 | orchestrator | glance : Check glance containers ---------------------------------------- 5.13s 2025-07-04 18:27:43.910775 | orchestrator | glance : Ensuring config directories exist ------------------------------ 5.10s 2025-07-04 18:27:43.910784 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 4.43s 2025-07-04 18:27:43.910794 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 4.21s 2025-07-04 18:27:43.910803 | orchestrator | glance : Copying over config.json files for services -------------------- 4.06s 2025-07-04 18:27:43.910813 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 4.04s 2025-07-04 18:27:43.910822 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 4.02s 2025-07-04 18:27:43.910831 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 3.98s 2025-07-04 18:27:43.910841 | orchestrator | service-ks-register : glance | Creating users --------------------------- 3.88s 2025-07-04 18:27:43.910850 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 3.69s 2025-07-04 18:27:43.910860 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 3.58s 2025-07-04 18:27:43.910876 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 3.57s 2025-07-04 18:27:43.910886 | orchestrator | 
service-cert-copy : glance | Copying over backend internal TLS key ------ 3.14s 2025-07-04 18:27:43.910895 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 3.10s 2025-07-04 18:27:43.910905 | orchestrator | service-ks-register : glance | Creating services ------------------------ 3.04s 2025-07-04 18:27:43.910914 | orchestrator | 2025-07-04 18:27:43 | INFO  | Task 0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED 2025-07-04 18:27:43.910924 | orchestrator | 2025-07-04 18:27:43 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:27:46.949110 | orchestrator | 2025-07-04 18:27:46 | INFO  | Task f8543fda-3aef-4d02-9540-c3834c9c76cf is in state STARTED 2025-07-04 18:27:46.950685 | orchestrator | 2025-07-04 18:27:46 | INFO  | Task 7b5f0385-9d53-4371-b7d1-b7d87f496988 is in state STARTED 2025-07-04 18:27:46.952316 | orchestrator | 2025-07-04 18:27:46 | INFO  | Task 70d760e8-f478-4402-8018-580677b7228b is in state STARTED 2025-07-04 18:27:46.953503 | orchestrator | 2025-07-04 18:27:46 | INFO  | Task 0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED 2025-07-04 18:27:46.953531 | orchestrator | 2025-07-04 18:27:46 | INFO  | Wait 1 second(s) until the next check 2025-07-04
18:28:41.888584 | orchestrator | 2025-07-04 18:28:41 | INFO  | Task f8543fda-3aef-4d02-9540-c3834c9c76cf is in state STARTED 2025-07-04 18:28:41.890495 | orchestrator | 2025-07-04 18:28:41 | INFO  | Task 7b5f0385-9d53-4371-b7d1-b7d87f496988 is in state STARTED 2025-07-04 18:28:41.893498 | orchestrator | 2025-07-04 18:28:41 | INFO  | Task 70d760e8-f478-4402-8018-580677b7228b is in state SUCCESS 2025-07-04 18:28:41.896086 | orchestrator | 2025-07-04 18:28:41.896139 | orchestrator | 2025-07-04 18:28:41.896154 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-04 18:28:41.896168 | orchestrator | 2025-07-04 18:28:41.896181 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-04 18:28:41.896196 | orchestrator | Friday 04 July 2025 18:25:23 +0000 (0:00:00.451) 0:00:00.451 *********** 2025-07-04 18:28:41.896209 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:28:41.896223 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:28:41.896236 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:28:41.896251 | orchestrator | ok: [testbed-node-3] 2025-07-04 18:28:41.896260 | orchestrator | ok: [testbed-node-4] 2025-07-04 18:28:41.896267 | orchestrator | ok: [testbed-node-5] 2025-07-04 18:28:41.896276 | orchestrator | 2025-07-04 18:28:41.896284 | orchestrator | TASK [Group hosts based on enabled
services] ***********************************
2025-07-04 18:28:41.896292 | orchestrator | Friday 04 July 2025 18:25:25 +0000 (0:00:01.837) 0:00:02.288 ***********
2025-07-04 18:28:41.896300 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True)
2025-07-04 18:28:41.896309 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True)
2025-07-04 18:28:41.896317 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True)
2025-07-04 18:28:41.896325 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True)
2025-07-04 18:28:41.896333 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True)
2025-07-04 18:28:41.896340 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True)
2025-07-04 18:28:41.896348 | orchestrator |
2025-07-04 18:28:41.896357 | orchestrator | PLAY [Apply role cinder] *******************************************************
2025-07-04 18:28:41.896372 | orchestrator |
2025-07-04 18:28:41.896381 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-07-04 18:28:41.896389 | orchestrator | Friday 04 July 2025 18:25:26 +0000 (0:00:00.821) 0:00:03.110 ***********
2025-07-04 18:28:41.896398 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-04 18:28:41.896407 | orchestrator |
2025-07-04 18:28:41.896442 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************
2025-07-04 18:28:41.896456 | orchestrator | Friday 04 July 2025 18:25:29 +0000 (0:00:02.986) 0:00:06.096 ***********
2025-07-04 18:28:41.896471 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3))
2025-07-04 18:28:41.896484 | orchestrator |
2025-07-04 18:28:41.896497 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] ***********************
2025-07-04 18:28:41.896511 | orchestrator | Friday 04 July 2025 18:25:32 +0000 (0:00:03.667) 0:00:09.764 ***********
2025-07-04 18:28:41.896540 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal)
2025-07-04 18:28:41.896556 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public)
2025-07-04 18:28:41.896570 | orchestrator |
2025-07-04 18:28:41.896587 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************
2025-07-04 18:28:41.896602 | orchestrator | Friday 04 July 2025 18:25:39 +0000 (0:00:06.835) 0:00:16.599 ***********
2025-07-04 18:28:41.896615 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-07-04 18:28:41.896631 | orchestrator |
2025-07-04 18:28:41.896643 | orchestrator | TASK [service-ks-register : cinder | Creating users] ***************************
2025-07-04 18:28:41.896657 | orchestrator | Friday 04 July 2025 18:25:42 +0000 (0:00:03.346) 0:00:19.946 ***********
2025-07-04 18:28:41.896670 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-07-04 18:28:41.896685 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service)
2025-07-04 18:28:41.896699 | orchestrator |
2025-07-04 18:28:41.896713 | orchestrator | TASK [service-ks-register : cinder | Creating roles] ***************************
2025-07-04 18:28:41.896727 | orchestrator | Friday 04 July 2025 18:25:46 +0000 (0:00:04.020) 0:00:23.966 ***********
2025-07-04 18:28:41.896741 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-07-04 18:28:41.896755 | orchestrator |
2025-07-04 18:28:41.896769 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] **********************
2025-07-04 18:28:41.896784 | orchestrator | Friday 04 July 2025 18:25:50 +0000 (0:00:03.647) 0:00:27.614 ***********
2025-07-04 18:28:41.896798 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin)
2025-07-04 18:28:41.896812 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service)
2025-07-04 18:28:41.896826 | orchestrator |
2025-07-04 18:28:41.896841 | orchestrator | TASK [cinder : Ensuring config directories exist] ******************************
2025-07-04 18:28:41.896856 | orchestrator | Friday 04 July 2025 18:25:59 +0000 (0:00:08.707) 0:00:36.321 ***********
2025-07-04 18:28:41.896922 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-07-04 18:28:41.896943 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-07-04 18:28:41.896974 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-07-04 18:28:41.896999 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-07-04 18:28:41.897017 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-07-04 18:28:41.897032 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-07-04 18:28:41.897059 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-07-04 18:28:41.897084 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-07-04 18:28:41.897104 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-07-04 18:28:41.897119 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-07-04 18:28:41.897133 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-07-04 18:28:41.897501 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-07-04 18:28:41.897533 | orchestrator |
2025-07-04 18:28:41.897542 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-07-04 18:28:41.897551 | orchestrator | Friday 04 July 2025 18:26:02 +0000 (0:00:03.557) 0:00:39.878 ***********
2025-07-04 18:28:41.897559 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:28:41.897567 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:28:41.897575 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:28:41.897583 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:28:41.897591 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:28:41.897599 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:28:41.897607 | orchestrator |
2025-07-04 18:28:41.897614 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-07-04 18:28:41.897622 | orchestrator | Friday 04 July 2025 18:26:03 +0000 (0:00:00.500) 0:00:40.379 ***********
2025-07-04 18:28:41.897630 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:28:41.897638 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:28:41.897645 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:28:41.897653 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-07-04 18:28:41.897661 | orchestrator |
2025-07-04 18:28:41.897669 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] *************
2025-07-04 18:28:41.897677 | orchestrator | Friday 04 July 2025 18:26:04 +0000 (0:00:01.097) 0:00:41.476 ***********
2025-07-04 18:28:41.897685 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume)
2025-07-04 18:28:41.897693 | orchestrator | changed: [testbed-node-4] => (item=cinder-volume)
2025-07-04 18:28:41.897701 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume)
2025-07-04 18:28:41.897708 | orchestrator | changed: [testbed-node-3] => (item=cinder-backup)
2025-07-04 18:28:41.897716 | orchestrator | changed: [testbed-node-4] => (item=cinder-backup)
2025-07-04 18:28:41.897723 | orchestrator | changed: [testbed-node-5] => (item=cinder-backup)
2025-07-04 18:28:41.897731 | orchestrator |
2025-07-04 18:28:41.897739 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************
2025-07-04 18:28:41.897747 | orchestrator | Friday 04 July 2025 18:26:06 +0000 (0:00:02.546) 0:00:44.023 ***********
2025-07-04 18:28:41.897762 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-07-04 18:28:41.897773 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-07-04 18:28:41.897788 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-07-04 18:28:41.897803 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-07-04 18:28:41.897812 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-07-04 18:28:41.897823 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-07-04 18:28:41.897832 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-07-04 18:28:41.897852 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-07-04 18:28:41.897861 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-07-04 18:28:41.897874 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-07-04 18:28:41.897909 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-07-04 18:28:41.897918 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-07-04 18:28:41.897932 | orchestrator |
2025-07-04 18:28:41.897940 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] *****************
2025-07-04 18:28:41.897948 | orchestrator | Friday 04 July 2025 18:26:10 +0000 (0:00:03.780) 0:00:47.804 ***********
2025-07-04 18:28:41.897956 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2025-07-04 18:28:41.897967 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2025-07-04 18:28:41.897979 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2025-07-04 18:28:41.897992 | orchestrator |
2025-07-04 18:28:41.898162 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] *****************
2025-07-04 18:28:41.898519 | orchestrator | Friday 04 July 2025 18:26:12 +0000 (0:00:01.913) 0:00:49.717 ***********
2025-07-04 18:28:41.898556 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring)
2025-07-04 18:28:41.898565 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder.keyring)
2025-07-04 18:28:41.898573 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder.keyring)
2025-07-04 18:28:41.898581 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder-backup.keyring)
2025-07-04 18:28:41.898589 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder-backup.keyring)
2025-07-04 18:28:41.898597 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder-backup.keyring)
2025-07-04 18:28:41.898605 | orchestrator |
2025-07-04 18:28:41.898613 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] *****
2025-07-04 18:28:41.898620 | orchestrator | Friday 04 July 2025 18:26:16 +0000 (0:00:03.359) 0:00:53.076 ***********
2025-07-04 18:28:41.898628 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume)
2025-07-04 18:28:41.898637 | orchestrator | ok: [testbed-node-4] => (item=cinder-volume)
2025-07-04 18:28:41.898644 | orchestrator | ok: [testbed-node-5] => (item=cinder-volume)
2025-07-04 18:28:41.898652 | orchestrator | ok: [testbed-node-3] => (item=cinder-backup)
2025-07-04 18:28:41.898660 | orchestrator | ok: [testbed-node-4] => (item=cinder-backup)
2025-07-04 18:28:41.898668 | orchestrator | ok: [testbed-node-5] => (item=cinder-backup)
2025-07-04 18:28:41.898676 | orchestrator |
2025-07-04 18:28:41.898683 | orchestrator | TASK [cinder : Check if policies shall be overwritten] *************************
2025-07-04 18:28:41.898691 | orchestrator | Friday 04 July 2025 18:26:17 +0000 (0:00:01.093) 0:00:54.170 ***********
2025-07-04 18:28:41.898699 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:28:41.898707 | orchestrator |
2025-07-04 18:28:41.898715 | orchestrator | TASK [cinder : Set cinder policy file] *****************************************
2025-07-04 18:28:41.898722 | orchestrator | Friday 04 July 2025 18:26:17 +0000 (0:00:00.115) 0:00:54.286 ***********
2025-07-04 18:28:41.898730 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:28:41.898738 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:28:41.898746 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:28:41.898754 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:28:41.898761 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:28:41.898769 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:28:41.898777 | orchestrator |
2025-07-04 18:28:41.898785 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-07-04 18:28:41.898795 | orchestrator | Friday 04 July 2025 18:26:17 +0000 (0:00:00.673) 0:00:54.959 ***********
2025-07-04 18:28:41.898809 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-04 18:28:41.898825 | orchestrator |
2025-07-04 18:28:41.898844 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] *********
2025-07-04 18:28:41.898923 | orchestrator | Friday 04 July 2025 18:26:19 +0000 (0:00:01.169) 0:00:56.129 ***********
2025-07-04 18:28:41.898935 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-07-04 18:28:41.898945 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-07-04 18:28:41.898987 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-07-04 18:28:41.898997 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-07-04 18:28:41.899010 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-07-04 18:28:41.899025 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-07-04 18:28:41.899033 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-07-04 18:28:41.899067 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-07-04 18:28:41.899077 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-07-04 18:28:41.899085 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-07-04 18:28:41.899104 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-07-04 18:28:41.899114 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-07-04 18:28:41.899124 | orchestrator |
2025-07-04 18:28:41.899133 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] ***
2025-07-04 18:28:41.899142 | orchestrator | Friday 04 July 2025 18:26:22 +0000 (0:00:03.035) 0:00:59.164 ***********
2025-07-04 18:28:41.899157 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port':
'8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-07-04 18:28:41.899167 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-04 18:28:41.899176 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:28:41.899186 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-07-04 18:28:41.899205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 
'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-04 18:28:41.899214 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:28:41.899224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-07-04 18:28:41.899234 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-04 18:28:41.899243 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:28:41.899259 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-07-04 18:28:41.899269 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-04 18:28:41.899284 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:28:41.899298 | orchestrator | 
skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-07-04 18:28:41.899308 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-04 18:28:41.899316 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:28:41.899326 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': 
[''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-07-04 18:28:41.899342 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-04 18:28:41.899351 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:28:41.899360 | orchestrator | 2025-07-04 18:28:41.899369 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2025-07-04 18:28:41.899378 | orchestrator | Friday 04 July 2025 18:26:23 +0000 (0:00:01.391) 0:01:00.555 *********** 2025-07-04 18:28:41.899387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-07-04 18:28:41.899408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-04 18:28:41.899418 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:28:41.899426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 
'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-07-04 18:28:41.899436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-04 18:28:41.899445 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:28:41.899459 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-07-04 18:28:41.899470 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-04 18:28:41.899484 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:28:41.899498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-07-04 18:28:41.899508 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 
'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-04 18:28:41.899516 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:28:41.899524 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-07-04 18:28:41.899537 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-04 18:28:41.899545 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:28:41.899559 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-07-04 18:28:41.899570 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-04 18:28:41.899579 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:28:41.899587 | orchestrator | 2025-07-04 18:28:41.899594 | orchestrator | TASK [cinder : Copying over 
config.json files for services] ******************** 2025-07-04 18:28:41.899602 | orchestrator | Friday 04 July 2025 18:26:26 +0000 (0:00:03.240) 0:01:03.796 *********** 2025-07-04 18:28:41.899610 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-04 18:28:41.899619 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 
'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-04 18:28:41.899632 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-04 18:28:41.899649 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-07-04 18:28:41.899661 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-07-04 18:28:41.899669 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-07-04 18:28:41.899678 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-04 18:28:41.899691 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-04 18:28:41.899704 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-04 18:28:41.899713 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-04 18:28:41.899725 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-04 18:28:41.899733 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-04 18:28:41.899742 | orchestrator | 2025-07-04 18:28:41.899750 | orchestrator | TASK 
[cinder : Copying over cinder-wsgi.conf] ********************************** 2025-07-04 18:28:41.899758 | orchestrator | Friday 04 July 2025 18:26:29 +0000 (0:00:02.822) 0:01:06.619 *********** 2025-07-04 18:28:41.899765 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-07-04 18:28:41.899773 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:28:41.899781 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-07-04 18:28:41.899789 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:28:41.899797 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-07-04 18:28:41.899810 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:28:41.899818 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-07-04 18:28:41.899825 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-07-04 18:28:41.899837 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-07-04 18:28:41.899845 | orchestrator | 2025-07-04 18:28:41.899853 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2025-07-04 18:28:41.899861 | orchestrator | Friday 04 July 2025 18:26:31 +0000 (0:00:01.880) 0:01:08.499 *********** 2025-07-04 18:28:41.899869 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-04 18:28:41.899902 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-04 18:28:41.899912 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': 
{'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-04 18:28:41.899921 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-07-04 18:28:41.899941 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-07-04 18:28:41.899950 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-07-04 18:28:41.899962 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-04 18:28:41.899971 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-04 18:28:41.899979 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-04 18:28:41.899993 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-04 18:28:41.900005 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-04 18:28:41.900014 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-04 18:28:41.900022 | orchestrator | 2025-07-04 18:28:41.900030 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2025-07-04 18:28:41.900038 | orchestrator | Friday 04 July 2025 18:26:43 +0000 (0:00:11.595) 0:01:20.095 *********** 2025-07-04 18:28:41.900046 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:28:41.900054 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:28:41.900062 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:28:41.900070 | orchestrator | changed: [testbed-node-3] 2025-07-04 18:28:41.900077 | orchestrator | changed: [testbed-node-4] 2025-07-04 18:28:41.900085 | orchestrator | changed: [testbed-node-5] 2025-07-04 18:28:41.900093 | orchestrator | 2025-07-04 18:28:41.900101 | 
orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2025-07-04 18:28:41.900109 | orchestrator | Friday 04 July 2025 18:26:46 +0000 (0:00:03.186) 0:01:23.281 *********** 2025-07-04 18:28:41.900121 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-07-04 18:28:41.900129 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-04 18:28:41.900147 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-07-04 18:28:41.900155 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-04 18:28:41.900164 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:28:41.900172 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-07-04 18:28:41.900184 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-04 18:28:41.900192 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:28:41.900200 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:28:41.900208 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-07-04 18:28:41.900222 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-04 18:28:41.900230 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:28:41.900242 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-07-04 18:28:41.900251 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-04 18:28:41.900259 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:28:41.900271 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-07-04 18:28:41.900288 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-04 18:28:41.900296 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:28:41.900304 | orchestrator | 2025-07-04 18:28:41.900312 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2025-07-04 18:28:41.900320 | orchestrator | Friday 04 July 2025 18:26:47 +0000 (0:00:01.068) 0:01:24.349 *********** 2025-07-04 18:28:41.900328 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:28:41.900335 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:28:41.900343 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:28:41.900351 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:28:41.900359 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:28:41.900366 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:28:41.900374 | orchestrator | 2025-07-04 18:28:41.900382 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2025-07-04 18:28:41.900390 | orchestrator | Friday 04 July 2025 18:26:48 +0000 (0:00:00.738) 0:01:25.088 *********** 2025-07-04 18:28:41.900403 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 
'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-04 18:28:41.900412 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-04 18:28:41.900427 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 
'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-04 18:28:41.900441 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-07-04 18:28:41.900450 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 
2025-07-04 18:28:41.900463 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-07-04 18:28:41.900471 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-04 18:28:41.900483 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-04 18:28:41.900497 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-04 18:28:41.900505 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-04 18:28:41.900517 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 
'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-04 18:28:41.900526 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-04 18:28:41.900534 | orchestrator | 2025-07-04 18:28:41.900542 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-07-04 18:28:41.900550 | orchestrator | Friday 04 July 2025 18:26:50 +0000 (0:00:02.275) 0:01:27.364 *********** 2025-07-04 18:28:41.900558 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:28:41.900566 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:28:41.900574 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:28:41.900581 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:28:41.900589 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:28:41.900597 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:28:41.900604 | orchestrator | 2025-07-04 18:28:41.900612 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 
2025-07-04 18:28:41.900620 | orchestrator | Friday 04 July 2025 18:26:51 +0000 (0:00:00.706) 0:01:28.070 ***********
2025-07-04 18:28:41.900633 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:28:41.900641 | orchestrator |
2025-07-04 18:28:41.900649 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] **********
2025-07-04 18:28:41.900656 | orchestrator | Friday 04 July 2025 18:26:53 +0000 (0:00:02.186) 0:01:30.257 ***********
2025-07-04 18:28:41.900664 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:28:41.900672 | orchestrator |
2025-07-04 18:28:41.900680 | orchestrator | TASK [cinder : Running Cinder bootstrap container] *****************************
2025-07-04 18:28:41.900687 | orchestrator | Friday 04 July 2025 18:26:55 +0000 (0:00:02.354) 0:01:32.611 ***********
2025-07-04 18:28:41.900695 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:28:41.900703 | orchestrator |
2025-07-04 18:28:41.900711 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-07-04 18:28:41.900719 | orchestrator | Friday 04 July 2025 18:27:16 +0000 (0:00:20.667) 0:01:53.279 ***********
2025-07-04 18:28:41.900726 | orchestrator |
2025-07-04 18:28:41.900738 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-07-04 18:28:41.900746 | orchestrator | Friday 04 July 2025 18:27:16 +0000 (0:00:00.145) 0:01:53.424 ***********
2025-07-04 18:28:41.900754 | orchestrator |
2025-07-04 18:28:41.900761 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-07-04 18:28:41.900769 | orchestrator | Friday 04 July 2025 18:27:16 +0000 (0:00:00.180) 0:01:53.604 ***********
2025-07-04 18:28:41.900777 | orchestrator |
2025-07-04 18:28:41.900785 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-07-04 18:28:41.900792 | orchestrator | Friday 04 July 2025 18:27:16 +0000 (0:00:00.159) 0:01:53.763 ***********
2025-07-04 18:28:41.900800 | orchestrator |
2025-07-04 18:28:41.900808 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-07-04 18:28:41.900815 | orchestrator | Friday 04 July 2025 18:27:16 +0000 (0:00:00.141) 0:01:53.905 ***********
2025-07-04 18:28:41.900823 | orchestrator |
2025-07-04 18:28:41.900831 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-07-04 18:28:41.900839 | orchestrator | Friday 04 July 2025 18:27:17 +0000 (0:00:00.132) 0:01:54.038 ***********
2025-07-04 18:28:41.900846 | orchestrator |
2025-07-04 18:28:41.900854 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************
2025-07-04 18:28:41.900862 | orchestrator | Friday 04 July 2025 18:27:17 +0000 (0:00:00.132) 0:01:54.170 ***********
2025-07-04 18:28:41.900869 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:28:41.900898 | orchestrator | changed: [testbed-node-1]
2025-07-04 18:28:41.900907 | orchestrator | changed: [testbed-node-2]
2025-07-04 18:28:41.900915 | orchestrator |
2025-07-04 18:28:41.900922 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ******************
2025-07-04 18:28:41.900930 | orchestrator | Friday 04 July 2025 18:27:43 +0000 (0:00:26.787) 0:02:20.958 ***********
2025-07-04 18:28:41.900938 | orchestrator | changed: [testbed-node-2]
2025-07-04 18:28:41.900946 | orchestrator | changed: [testbed-node-1]
2025-07-04 18:28:41.900954 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:28:41.900961 | orchestrator |
2025-07-04 18:28:41.900969 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] *********************
2025-07-04 18:28:41.900977 | orchestrator | Friday 04 July 2025 18:27:51 +0000 (0:00:08.022) 0:02:28.980 ***********
2025-07-04 18:28:41.900984 | orchestrator | changed: [testbed-node-5]
2025-07-04 18:28:41.900992 | orchestrator | changed: [testbed-node-3]
2025-07-04 18:28:41.901018 | orchestrator | changed: [testbed-node-4]
2025-07-04 18:28:41.901027 | orchestrator |
2025-07-04 18:28:41.901034 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] *********************
2025-07-04 18:28:41.901043 | orchestrator | Friday 04 July 2025 18:28:27 +0000 (0:00:35.761) 0:03:04.742 ***********
2025-07-04 18:28:41.901050 | orchestrator | changed: [testbed-node-3]
2025-07-04 18:28:41.901058 | orchestrator | changed: [testbed-node-5]
2025-07-04 18:28:41.901066 | orchestrator | changed: [testbed-node-4]
2025-07-04 18:28:41.901074 | orchestrator |
2025-07-04 18:28:41.901081 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] ***
2025-07-04 18:28:41.901095 | orchestrator | Friday 04 July 2025 18:28:37 +0000 (0:00:10.140) 0:03:14.882 ***********
2025-07-04 18:28:41.901103 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:28:41.901111 | orchestrator |
2025-07-04 18:28:41.901118 | orchestrator | PLAY RECAP *********************************************************************
2025-07-04 18:28:41.901132 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-07-04 18:28:41.901147 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-07-04 18:28:41.901161 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-07-04 18:28:41.901175 | orchestrator | testbed-node-3 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-07-04 18:28:41.901187 | orchestrator | testbed-node-4 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-07-04 18:28:41.901196 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-07-04 18:28:41.901203 | orchestrator |
2025-07-04 18:28:41.901294 | orchestrator |
2025-07-04 18:28:41.901305 | orchestrator | TASKS RECAP ********************************************************************
2025-07-04 18:28:41.901313 | orchestrator | Friday 04 July 2025 18:28:38 +0000 (0:00:00.662) 0:03:15.545 ***********
2025-07-04 18:28:41.901321 | orchestrator | ===============================================================================
2025-07-04 18:28:41.901329 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 35.76s
2025-07-04 18:28:41.901337 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 26.79s
2025-07-04 18:28:41.901345 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 20.67s
2025-07-04 18:28:41.901353 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 11.60s
2025-07-04 18:28:41.901360 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 10.14s
2025-07-04 18:28:41.901368 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 8.71s
2025-07-04 18:28:41.901376 | orchestrator | cinder : Restart cinder-scheduler container ----------------------------- 8.02s
2025-07-04 18:28:41.901383 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.84s
2025-07-04 18:28:41.901391 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 4.02s
2025-07-04 18:28:41.901404 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 3.78s
2025-07-04 18:28:41.901412 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.67s
2025-07-04 18:28:41.901420 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.65s
2025-07-04 18:28:41.901428 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 3.56s
2025-07-04 18:28:41.901435 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.36s
2025-07-04 18:28:41.901443 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.35s
2025-07-04 18:28:41.901451 | orchestrator | service-cert-copy : cinder | Copying over backend internal TLS key ------ 3.24s
2025-07-04 18:28:41.901458 | orchestrator | cinder : Generating 'hostnqn' file for cinder_volume -------------------- 3.19s
2025-07-04 18:28:41.901466 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 3.04s
2025-07-04 18:28:41.901474 | orchestrator | cinder : include_tasks -------------------------------------------------- 2.99s
2025-07-04 18:28:41.901482 | orchestrator | cinder : Copying over config.json files for services -------------------- 2.82s
2025-07-04 18:28:41.901489 | orchestrator | 2025-07-04 18:28:41 | INFO  | Task 27c69a28-ef59-445a-a906-1b87b138db98 is in state STARTED
2025-07-04 18:28:41.901508 | orchestrator | 2025-07-04 18:28:41 | INFO  | Task 0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED
2025-07-04 18:28:41.901516 | orchestrator | 2025-07-04 18:28:41 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:28:44.946435 | orchestrator | 2025-07-04 18:28:44 | INFO  | Task f8543fda-3aef-4d02-9540-c3834c9c76cf is in state STARTED
2025-07-04 18:28:44.948412 | orchestrator |
2025-07-04 18:28:44.948503 | orchestrator |
2025-07-04 18:28:44.948527 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-04 18:28:44.948549 | orchestrator |
2025-07-04 18:28:44.948569 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-04 18:28:44.948587 | orchestrator | Friday 04 July 2025 18:27:46 +0000 (0:00:00.247) 0:00:00.247 ***********
2025-07-04 18:28:44.948606 | orchestrator | ok: [testbed-node-0] 2025-07-04
18:28:44.948625 | orchestrator | ok: [testbed-node-1]
2025-07-04 18:28:44.948643 | orchestrator | ok: [testbed-node-2]
2025-07-04 18:28:44.948662 | orchestrator |
2025-07-04 18:28:44.948681 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-04 18:28:44.948701 | orchestrator | Friday 04 July 2025 18:27:47 +0000 (0:00:00.276) 0:00:00.523 ***********
2025-07-04 18:28:44.948746 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True)
2025-07-04 18:28:44.948767 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True)
2025-07-04 18:28:44.948787 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True)
2025-07-04 18:28:44.948806 | orchestrator |
2025-07-04 18:28:44.948825 | orchestrator | PLAY [Apply role octavia] ******************************************************
2025-07-04 18:28:44.948844 | orchestrator |
2025-07-04 18:28:44.948864 | orchestrator | TASK [octavia : include_tasks] *************************************************
2025-07-04 18:28:44.948884 | orchestrator | Friday 04 July 2025 18:27:47 +0000 (0:00:00.342) 0:00:00.866 ***********
2025-07-04 18:28:44.948979 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-04 18:28:44.949002 | orchestrator |
2025-07-04 18:28:44.949023 | orchestrator | TASK [service-ks-register : octavia | Creating services] ***********************
2025-07-04 18:28:44.949041 | orchestrator | Friday 04 July 2025 18:27:47 +0000 (0:00:00.505) 0:00:01.372 ***********
2025-07-04 18:28:44.949061 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer))
2025-07-04 18:28:44.949079 | orchestrator |
2025-07-04 18:28:44.949098 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] **********************
2025-07-04 18:28:44.949117 | orchestrator | Friday 04 July 2025 18:27:51 +0000 (0:00:03.483) 0:00:04.855 ***********
2025-07-04 18:28:44.949136 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal)
2025-07-04 18:28:44.949157 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public)
2025-07-04 18:28:44.949177 | orchestrator |
2025-07-04 18:28:44.949195 | orchestrator | TASK [service-ks-register : octavia | Creating projects] ***********************
2025-07-04 18:28:44.949240 | orchestrator | Friday 04 July 2025 18:27:58 +0000 (0:00:06.951) 0:00:11.807 ***********
2025-07-04 18:28:44.949260 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-07-04 18:28:44.949279 | orchestrator |
2025-07-04 18:28:44.949297 | orchestrator | TASK [service-ks-register : octavia | Creating users] **************************
2025-07-04 18:28:44.949318 | orchestrator | Friday 04 July 2025 18:28:01 +0000 (0:00:03.202) 0:00:15.010 ***********
2025-07-04 18:28:44.949337 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-07-04 18:28:44.949356 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2025-07-04 18:28:44.949377 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2025-07-04 18:28:44.949395 | orchestrator |
2025-07-04 18:28:44.949437 | orchestrator | TASK [service-ks-register : octavia | Creating roles] **************************
2025-07-04 18:28:44.949457 | orchestrator | Friday 04 July 2025 18:28:10 +0000 (0:00:08.518) 0:00:23.528 ***********
2025-07-04 18:28:44.949737 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-07-04 18:28:44.949766 | orchestrator |
2025-07-04 18:28:44.949788 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] *********************
2025-07-04 18:28:44.949834 | orchestrator | Friday 04 July 2025 18:28:13 +0000 (0:00:03.547) 0:00:27.075 ***********
2025-07-04 18:28:44.949853 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin)
2025-07-04 18:28:44.949873 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin)
2025-07-04 18:28:44.949917 | orchestrator |
2025-07-04 18:28:44.949995 | orchestrator | TASK [octavia : Adding octavia related roles] **********************************
2025-07-04 18:28:44.950109 | orchestrator | Friday 04 July 2025 18:28:21 +0000 (0:00:07.948) 0:00:35.024 ***********
2025-07-04 18:28:44.950132 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer)
2025-07-04 18:28:44.950143 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer)
2025-07-04 18:28:44.950159 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member)
2025-07-04 18:28:44.950177 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin)
2025-07-04 18:28:44.950197 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin)
2025-07-04 18:28:44.950216 | orchestrator |
2025-07-04 18:28:44.950259 | orchestrator | TASK [octavia : include_tasks] *************************************************
2025-07-04 18:28:44.950278 | orchestrator | Friday 04 July 2025 18:28:38 +0000 (0:00:17.326) 0:00:52.351 ***********
2025-07-04 18:28:44.950297 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-04 18:28:44.950316 | orchestrator |
2025-07-04 18:28:44.950336 | orchestrator | TASK [octavia : Create amphora flavor] *****************************************
2025-07-04 18:28:44.950355 | orchestrator | Friday 04 July 2025 18:28:39 +0000 (0:00:00.543) 0:00:52.894 ***********
2025-07-04 18:28:44.950377 | orchestrator | An exception occurred during task execution. To see the full traceback, use -vvv. The error was: keystoneauth1.exceptions.catalog.EndpointNotFound: internal endpoint for compute service in RegionOne region not found
2025-07-04 18:28:44.950456 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"action": "os_nova_flavor", "changed": false, "module_stderr": "Traceback (most recent call last):\n File \"/tmp/ansible-tmp-1751653721.0889297-6485-148639010396980/AnsiballZ_compute_flavor.py\", line 107, in \n _ansiballz_main()\n File \"/tmp/ansible-tmp-1751653721.0889297-6485-148639010396980/AnsiballZ_compute_flavor.py\", line 99, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/tmp/ansible-tmp-1751653721.0889297-6485-148639010396980/AnsiballZ_compute_flavor.py\", line 47, in invoke_module\n runpy.run_module(mod_name='ansible_collections.openstack.cloud.plugins.modules.compute_flavor', init_globals=dict(_module_fqn='ansible_collections.openstack.cloud.plugins.modules.compute_flavor', _modlib_path=modlib_path),\n File \"\", line 226, in run_module\n File \"\", line 98, in _run_module_code\n File \"\", line 88, in _run_code\n File \"/tmp/ansible_os_nova_flavor_payload_gn4e7iye/ansible_os_nova_flavor_payload.zip/ansible_collections/openstack/cloud/plugins/modules/compute_flavor.py\", line 367, in \n File \"/tmp/ansible_os_nova_flavor_payload_gn4e7iye/ansible_os_nova_flavor_payload.zip/ansible_collections/openstack/cloud/plugins/modules/compute_flavor.py\", line 363, in main\n File \"/tmp/ansible_os_nova_flavor_payload_gn4e7iye/ansible_os_nova_flavor_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", line 417, in __call__\n File \"/tmp/ansible_os_nova_flavor_payload_gn4e7iye/ansible_os_nova_flavor_payload.zip/ansible_collections/openstack/cloud/plugins/modules/compute_flavor.py\", line 220, in run\n File \"/opt/ansible/lib/python3.11/site-packages/openstack/service_description.py\", line 88, in __get__\n proxy = self._make_proxy(instance)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.11/site-packages/openstack/service_description.py\", line 286, in _make_proxy\n found_version = temp_adapter.get_api_major_version()\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.11/site-packages/keystoneauth1/adapter.py\", line 352, in get_api_major_version\n return self.session.get_api_major_version(auth or self.auth, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.11/site-packages/keystoneauth1/session.py\", line 1289, in get_api_major_version\n return auth.get_api_major_version(self, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.11/site-packages/keystoneauth1/identity/base.py\", line 497, in get_api_major_version\n data = get_endpoint_data(discover_versions=discover_versions)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.11/site-packages/keystoneauth1/identity/base.py\", line 272, in get_endpoint_data\n endpoint_data = service_catalog.endpoint_data_for(\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.11/site-packages/keystoneauth1/access/service_catalog.py\", line 459, in endpoint_data_for\n raise exceptions.EndpointNotFound(msg)\nkeystoneauth1.exceptions.catalog.EndpointNotFound: internal endpoint for compute service in RegionOne region not found\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
2025-07-04 18:28:44.950506 | orchestrator |
2025-07-04 18:28:44.950565 | orchestrator | PLAY RECAP *********************************************************************
2025-07-04 18:28:44.950588 | orchestrator | testbed-node-0 : ok=11  changed=5  unreachable=0 failed=1  skipped=0 rescued=0 ignored=0
2025-07-04 18:28:44.950609 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-04 18:28:44.950630 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-04 18:28:44.950651 | orchestrator |
2025-07-04 18:28:44.950670 | orchestrator |
2025-07-04 18:28:44.950690 | orchestrator | TASKS RECAP ********************************************************************
2025-07-04 18:28:44.950710 | orchestrator | Friday 04 July 2025 18:28:43 +0000 (0:00:03.728) 0:00:56.623 ***********
2025-07-04 18:28:44.950740 | orchestrator | ===============================================================================
2025-07-04 18:28:44.950760 | orchestrator | octavia : Adding octavia related roles --------------------------------- 17.33s
2025-07-04 18:28:44.950779 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.52s
2025-07-04 18:28:44.950797 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.95s
2025-07-04 18:28:44.950808 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.95s
2025-07-04 18:28:44.950819 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 3.73s
2025-07-04 18:28:44.950829 | orchestrator | service-ks-register : octavia | Creating roles -------------------------- 3.55s
2025-07-04 18:28:44.950861 | orchestrator | service-ks-register : octavia | Creating services ----------------------- 3.48s
2025-07-04 18:28:44.950872 | orchestrator | service-ks-register : octavia | Creating projects ----------------------- 3.20s
2025-07-04 18:28:44.950883 | orchestrator | octavia : include_tasks ------------------------------------------------- 0.54s
2025-07-04 18:28:44.950981 | orchestrator | octavia : include_tasks ------------------------------------------------- 0.51s
2025-07-04 18:28:44.950994 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.34s
2025-07-04 18:28:44.951005 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.28s
2025-07-04 18:28:44.951017 | orchestrator | 2025-07-04 18:28:44 | INFO  | Task 7b5f0385-9d53-4371-b7d1-b7d87f496988 is in state SUCCESS
2025-07-04 18:28:44.951038 | orchestrator | 2025-07-04 18:28:44 |
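The "Create amphora flavor" failure above is a Keystone service-catalog miss: keystoneauth walks the catalog attached to the auth token looking for a service of type `compute` with an `internal` endpoint in region `RegionOne`, and raises `EndpointNotFound` when no entry matches (here, presumably because the nova deployment tasks still running in parallel had not yet registered nova's endpoints). A minimal sketch of that lookup logic follows; the catalog data and the `endpoint_for` helper are hypothetical illustrations, not the actual keystoneauth1 implementation:

```python
# Sketch of a Keystone service-catalog endpoint lookup (hypothetical helper,
# not the real keystoneauth1 code). The catalog is the list of services that
# Keystone attaches to a token; each service advertises endpoints per
# interface (public/internal/admin) and region.

class EndpointNotFound(Exception):
    pass

def endpoint_for(catalog, service_type, interface="internal", region="RegionOne"):
    """Return the first endpoint URL matching type, interface and region."""
    for service in catalog:
        if service["type"] != service_type:
            continue
        for ep in service["endpoints"]:
            if ep["interface"] == interface and ep["region"] == region:
                return ep["url"]
    raise EndpointNotFound(
        f"{interface} endpoint for {service_type} service "
        f"in {region} region not found"
    )

# A catalog in which cinder is registered but nova (compute) is not yet --
# the situation the octavia role apparently ran into above.
catalog = [
    {"type": "volumev3", "endpoints": [
        {"interface": "internal", "region": "RegionOne",
         "url": "https://api-int.testbed.osism.xyz:8776/v3"},
    ]},
]
```

Looking up `endpoint_for(catalog, "compute")` with this data raises `EndpointNotFound` with the same message shape as the traceback above, while `endpoint_for(catalog, "volumev3")` succeeds; once nova's endpoints are registered, re-running the octavia role would find the compute entry.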
INFO  | Task 27c69a28-ef59-445a-a906-1b87b138db98 is in state STARTED
2025-07-04 18:28:44.954003 | orchestrator | 2025-07-04 18:28:44 | INFO  | Task 0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED
2025-07-04 18:28:44.954114 | orchestrator | 2025-07-04 18:28:44 | INFO  | Wait 1 second(s) until the next check
[... repeated status-polling cycles trimmed: tasks f8543fda-3aef-4d02-9540-c3834c9c76cf, 27c69a28-ef59-445a-a906-1b87b138db98 and 0286d4cd-37da-4905-b02c-80661dc010e4 remain in state STARTED, re-checked roughly every 3 seconds from 18:28:48 through 18:30:16, each cycle ending with "Wait 1 second(s) until the next check" ...]
2025-07-04 18:30:19.501695 | orchestrator |
2025-07-04 18:30:19 | INFO  | Task f8543fda-3aef-4d02-9540-c3834c9c76cf is in state STARTED 2025-07-04 18:30:19.504347 | orchestrator | 2025-07-04 18:30:19 | INFO  | Task 27c69a28-ef59-445a-a906-1b87b138db98 is in state STARTED 2025-07-04 18:30:19.506756 | orchestrator | 2025-07-04 18:30:19 | INFO  | Task 0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED 2025-07-04 18:30:19.507008 | orchestrator | 2025-07-04 18:30:19 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:30:22.563623 | orchestrator | 2025-07-04 18:30:22 | INFO  | Task f8543fda-3aef-4d02-9540-c3834c9c76cf is in state STARTED 2025-07-04 18:30:22.566417 | orchestrator | 2025-07-04 18:30:22 | INFO  | Task 27c69a28-ef59-445a-a906-1b87b138db98 is in state STARTED 2025-07-04 18:30:22.568378 | orchestrator | 2025-07-04 18:30:22 | INFO  | Task 0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED 2025-07-04 18:30:22.568426 | orchestrator | 2025-07-04 18:30:22 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:30:25.612465 | orchestrator | 2025-07-04 18:30:25 | INFO  | Task f8543fda-3aef-4d02-9540-c3834c9c76cf is in state STARTED 2025-07-04 18:30:25.614174 | orchestrator | 2025-07-04 18:30:25 | INFO  | Task 27c69a28-ef59-445a-a906-1b87b138db98 is in state STARTED 2025-07-04 18:30:25.616365 | orchestrator | 2025-07-04 18:30:25 | INFO  | Task 0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED 2025-07-04 18:30:25.616396 | orchestrator | 2025-07-04 18:30:25 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:30:28.661785 | orchestrator | 2025-07-04 18:30:28 | INFO  | Task f8543fda-3aef-4d02-9540-c3834c9c76cf is in state STARTED 2025-07-04 18:30:28.662273 | orchestrator | 2025-07-04 18:30:28 | INFO  | Task 27c69a28-ef59-445a-a906-1b87b138db98 is in state STARTED 2025-07-04 18:30:28.666106 | orchestrator | 2025-07-04 18:30:28 | INFO  | Task 0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED 2025-07-04 18:30:28.666171 | orchestrator | 2025-07-04 18:30:28 | INFO  | 
Wait 1 second(s) until the next check 2025-07-04 18:30:31.700525 | orchestrator | 2025-07-04 18:30:31 | INFO  | Task f8543fda-3aef-4d02-9540-c3834c9c76cf is in state STARTED 2025-07-04 18:30:31.701895 | orchestrator | 2025-07-04 18:30:31 | INFO  | Task 27c69a28-ef59-445a-a906-1b87b138db98 is in state STARTED 2025-07-04 18:30:31.703851 | orchestrator | 2025-07-04 18:30:31 | INFO  | Task 0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED 2025-07-04 18:30:31.704020 | orchestrator | 2025-07-04 18:30:31 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:30:34.749973 | orchestrator | 2025-07-04 18:30:34 | INFO  | Task f8543fda-3aef-4d02-9540-c3834c9c76cf is in state STARTED 2025-07-04 18:30:34.751857 | orchestrator | 2025-07-04 18:30:34 | INFO  | Task 27c69a28-ef59-445a-a906-1b87b138db98 is in state STARTED 2025-07-04 18:30:34.754710 | orchestrator | 2025-07-04 18:30:34 | INFO  | Task 0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED 2025-07-04 18:30:34.755255 | orchestrator | 2025-07-04 18:30:34 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:30:37.806317 | orchestrator | 2025-07-04 18:30:37 | INFO  | Task f8543fda-3aef-4d02-9540-c3834c9c76cf is in state STARTED 2025-07-04 18:30:37.806465 | orchestrator | 2025-07-04 18:30:37 | INFO  | Task 27c69a28-ef59-445a-a906-1b87b138db98 is in state STARTED 2025-07-04 18:30:37.807465 | orchestrator | 2025-07-04 18:30:37 | INFO  | Task 0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED 2025-07-04 18:30:37.807576 | orchestrator | 2025-07-04 18:30:37 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:30:40.852437 | orchestrator | 2025-07-04 18:30:40 | INFO  | Task f8543fda-3aef-4d02-9540-c3834c9c76cf is in state STARTED 2025-07-04 18:30:40.853582 | orchestrator | 2025-07-04 18:30:40 | INFO  | Task 27c69a28-ef59-445a-a906-1b87b138db98 is in state STARTED 2025-07-04 18:30:40.855718 | orchestrator | 2025-07-04 18:30:40 | INFO  | Task 0286d4cd-37da-4905-b02c-80661dc010e4 is in state 
STARTED 2025-07-04 18:30:40.855751 | orchestrator | 2025-07-04 18:30:40 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:30:43.908546 | orchestrator | 2025-07-04 18:30:43 | INFO  | Task f8543fda-3aef-4d02-9540-c3834c9c76cf is in state STARTED
2025-07-04 18:30:43.910509 | orchestrator | 2025-07-04 18:30:43 | INFO  | Task b9ad76ca-b7c4-4a01-8c3d-ef239953589f is in state STARTED
2025-07-04 18:30:43.911180 | orchestrator | 2025-07-04 18:30:43 | INFO  | Task 27c69a28-ef59-445a-a906-1b87b138db98 is in state STARTED
2025-07-04 18:30:43.912956 | orchestrator | 2025-07-04 18:30:43 | INFO  | Task 0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED
2025-07-04 18:30:43.913008 | orchestrator | 2025-07-04 18:30:43 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:30:46.970891 | orchestrator | 2025-07-04 18:30:46 | INFO  | Task f8543fda-3aef-4d02-9540-c3834c9c76cf is in state STARTED
2025-07-04 18:30:46.973579 | orchestrator | 2025-07-04 18:30:46 | INFO  | Task b9ad76ca-b7c4-4a01-8c3d-ef239953589f is in state STARTED
2025-07-04 18:30:46.975851 | orchestrator | 2025-07-04 18:30:46 | INFO  | Task 27c69a28-ef59-445a-a906-1b87b138db98 is in state STARTED
2025-07-04 18:30:46.977868 | orchestrator | 2025-07-04 18:30:46 | INFO  | Task 0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED
2025-07-04 18:30:46.978106 | orchestrator | 2025-07-04 18:30:46 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:30:50.036672 | orchestrator | 2025-07-04 18:30:50 | INFO  | Task f8543fda-3aef-4d02-9540-c3834c9c76cf is in state SUCCESS
2025-07-04 18:30:50.036776 | orchestrator | 2025-07-04 18:30:50 | INFO  | Task b9ad76ca-b7c4-4a01-8c3d-ef239953589f is in state STARTED
2025-07-04 18:30:50.036787 | orchestrator | 2025-07-04 18:30:50 | INFO  | Task 27c69a28-ef59-445a-a906-1b87b138db98 is in state STARTED
2025-07-04 18:30:50.041327 | orchestrator | 2025-07-04 18:30:50 | INFO  | Task 0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED
2025-07-04 18:30:50.041357 | orchestrator | 2025-07-04 18:30:50 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:30:53.089335 | orchestrator | 2025-07-04 18:30:53 | INFO  | Task b9ad76ca-b7c4-4a01-8c3d-ef239953589f is in state STARTED
2025-07-04 18:30:53.091009 | orchestrator | 2025-07-04 18:30:53 | INFO  | Task 27c69a28-ef59-445a-a906-1b87b138db98 is in state STARTED
2025-07-04 18:30:53.092840 | orchestrator | 2025-07-04 18:30:53 | INFO  | Task 0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED
2025-07-04 18:30:53.092979 | orchestrator | 2025-07-04 18:30:53 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:30:56.144159 | orchestrator | 2025-07-04 18:30:56 | INFO  | Task b9ad76ca-b7c4-4a01-8c3d-ef239953589f is in state STARTED
2025-07-04 18:30:56.144269 | orchestrator | 2025-07-04 18:30:56 | INFO  | Task 27c69a28-ef59-445a-a906-1b87b138db98 is in state STARTED
2025-07-04 18:30:56.145065 | orchestrator | 2025-07-04 18:30:56 | INFO  | Task 0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED
2025-07-04 18:30:56.145094 | orchestrator | 2025-07-04 18:30:56 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:30:59.200901 | orchestrator | 2025-07-04 18:30:59 | INFO  | Task b9ad76ca-b7c4-4a01-8c3d-ef239953589f is in state STARTED
2025-07-04 18:30:59.201558 | orchestrator | 2025-07-04 18:30:59 | INFO  | Task 27c69a28-ef59-445a-a906-1b87b138db98 is in state STARTED
2025-07-04 18:30:59.207915 | orchestrator | 2025-07-04 18:30:59 | INFO  | Task 0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED
2025-07-04 18:30:59.207973 | orchestrator | 2025-07-04 18:30:59 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:31:02.267076 | orchestrator | 2025-07-04 18:31:02 | INFO  | Task b9ad76ca-b7c4-4a01-8c3d-ef239953589f is in state SUCCESS
2025-07-04 18:31:02.269262 | orchestrator | 2025-07-04 18:31:02 | INFO  | Task 27c69a28-ef59-445a-a906-1b87b138db98 is in state STARTED
2025-07-04 18:31:02.271509 | orchestrator | 2025-07-04 18:31:02 |
INFO  | Task 0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED
2025-07-04 18:31:02.271553 | orchestrator | 2025-07-04 18:31:02 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:31:05.333418 | orchestrator |
2025-07-04 18:31:05.334188 | orchestrator |
2025-07-04 18:31:05.334209 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-04 18:31:05.334224 | orchestrator |
2025-07-04 18:31:05.334238 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-04 18:31:05.334251 | orchestrator | Friday 04 July 2025 18:27:32 +0000 (0:00:00.166) 0:00:00.166 ***********
2025-07-04 18:31:05.334264 | orchestrator | ok: [testbed-node-0]
2025-07-04 18:31:05.334278 | orchestrator | ok: [testbed-node-1]
2025-07-04 18:31:05.334290 | orchestrator | ok: [testbed-node-2]
2025-07-04 18:31:05.334303 | orchestrator |
2025-07-04 18:31:05.334317 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-04 18:31:05.334329 | orchestrator | Friday 04 July 2025 18:27:32 +0000 (0:00:00.260) 0:00:00.427 ***********
2025-07-04 18:31:05.334340 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True)
2025-07-04 18:31:05.334353 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True)
2025-07-04 18:31:05.334364 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True)
2025-07-04 18:31:05.334375 | orchestrator |
2025-07-04 18:31:05.334386 | orchestrator | PLAY [Wait for the Nova service] ***********************************************
2025-07-04 18:31:05.334398 | orchestrator |
2025-07-04 18:31:05.334696 | orchestrator | TASK [Waiting for Nova public port to be UP] ***********************************
2025-07-04 18:31:05.334727 | orchestrator | Friday 04 July 2025 18:27:33 +0000 (0:00:00.581) 0:00:01.008 ***********
2025-07-04 18:31:05.334745 | orchestrator |
2025-07-04 18:31:05.334781 | orchestrator | STILL ALIVE [task 'Waiting for Nova public port to be UP' is running] **********
2025-07-04 18:31:05.334799 | orchestrator |
2025-07-04 18:31:05.334817 | orchestrator | STILL ALIVE [task 'Waiting for Nova public port to be UP' is running] **********
2025-07-04 18:31:05.334834 | orchestrator | ok: [testbed-node-0]
2025-07-04 18:31:05.334853 | orchestrator | ok: [testbed-node-1]
2025-07-04 18:31:05.334871 | orchestrator | ok: [testbed-node-2]
2025-07-04 18:31:05.334921 | orchestrator |
2025-07-04 18:31:05.334940 | orchestrator | PLAY RECAP *********************************************************************
2025-07-04 18:31:05.334960 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-04 18:31:05.334976 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-04 18:31:05.334988 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-04 18:31:05.334999 | orchestrator |
2025-07-04 18:31:05.335010 | orchestrator |
2025-07-04 18:31:05.335021 | orchestrator | TASKS RECAP ********************************************************************
2025-07-04 18:31:05.335032 | orchestrator | Friday 04 July 2025 18:30:48 +0000 (0:03:14.800) 0:03:15.810 ***********
2025-07-04 18:31:05.335043 | orchestrator | ===============================================================================
2025-07-04 18:31:05.335054 | orchestrator | Waiting for Nova public port to be UP --------------------------------- 194.80s
2025-07-04 18:31:05.335064 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.58s
2025-07-04 18:31:05.335075 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.26s
2025-07-04 18:31:05.335086 | orchestrator |
2025-07-04 18:31:05.335097 | orchestrator | None
2025-07-04 18:31:05.335108 | orchestrator |
2025-07-04 18:31:05.335119 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-04 18:31:05.335130 | orchestrator |
2025-07-04 18:31:05.335141 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-04 18:31:05.335151 | orchestrator | Friday 04 July 2025 18:28:43 +0000 (0:00:00.254) 0:00:00.254 ***********
2025-07-04 18:31:05.335162 | orchestrator | ok: [testbed-node-0]
2025-07-04 18:31:05.335173 | orchestrator | ok: [testbed-node-1]
2025-07-04 18:31:05.335184 | orchestrator | ok: [testbed-node-2]
2025-07-04 18:31:05.335195 | orchestrator |
2025-07-04 18:31:05.335206 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-04 18:31:05.335217 | orchestrator | Friday 04 July 2025 18:28:43 +0000 (0:00:00.304) 0:00:00.559 ***********
2025-07-04 18:31:05.335227 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True)
2025-07-04 18:31:05.335240 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True)
2025-07-04 18:31:05.335250 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True)
2025-07-04 18:31:05.335261 | orchestrator |
2025-07-04 18:31:05.335272 | orchestrator | PLAY [Apply role grafana] ******************************************************
2025-07-04 18:31:05.335283 | orchestrator |
2025-07-04 18:31:05.335293 | orchestrator | TASK [grafana : include_tasks] *************************************************
2025-07-04 18:31:05.335304 | orchestrator | Friday 04 July 2025 18:28:43 +0000 (0:00:00.413) 0:00:00.972 ***********
2025-07-04 18:31:05.335316 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-04 18:31:05.335327 | orchestrator |
2025-07-04 18:31:05.335338 | orchestrator | TASK [grafana : Ensuring config directories exist] *****************************
2025-07-04 18:31:05.335349 | orchestrator | Friday 04 July 2025 18:28:44 +0000 (0:00:00.590)
0:00:01.562 *********** 2025-07-04 18:31:05.335364 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-04 18:31:05.335403 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-04 18:31:05.335461 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 
'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-04 18:31:05.335474 | orchestrator | 2025-07-04 18:31:05.335485 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2025-07-04 18:31:05.335496 | orchestrator | Friday 04 July 2025 18:28:45 +0000 (0:00:00.796) 0:00:02.359 *********** 2025-07-04 18:31:05.335507 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2025-07-04 18:31:05.335519 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2025-07-04 18:31:05.335530 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-07-04 18:31:05.335541 | orchestrator | 2025-07-04 18:31:05.335551 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-07-04 18:31:05.335562 | orchestrator | Friday 04 July 2025 18:28:46 +0000 (0:00:01.127) 0:00:03.486 *********** 2025-07-04 18:31:05.335573 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-04 18:31:05.335584 | orchestrator | 2025-07-04 18:31:05.335595 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2025-07-04 18:31:05.335605 | orchestrator | Friday 04 July 2025 18:28:47 +0000 (0:00:00.707) 0:00:04.193 *********** 2025-07-04 18:31:05.335617 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-04 18:31:05.335629 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-04 18:31:05.335652 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-04 18:31:05.335671 | orchestrator | 2025-07-04 18:31:05.335682 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2025-07-04 18:31:05.335693 | orchestrator | Friday 04 July 2025 18:28:48 +0000 (0:00:01.386) 0:00:05.580 *********** 2025-07-04 18:31:05.335711 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-07-04 18:31:05.335723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-07-04 18:31:05.335735 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:31:05.335745 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:31:05.335757 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-07-04 18:31:05.335769 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:31:05.335780 | orchestrator | 2025-07-04 18:31:05.335790 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2025-07-04 18:31:05.335801 | orchestrator | Friday 04 July 2025 18:28:48 +0000 (0:00:00.369) 0:00:05.950 *********** 2025-07-04 18:31:05.335812 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-07-04 18:31:05.335831 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-07-04 18:31:05.335843 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:31:05.335854 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:31:05.335873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-07-04 18:31:05.335885 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:31:05.335895 | orchestrator | 2025-07-04 18:31:05.335907 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2025-07-04 18:31:05.335917 | orchestrator | Friday 04 July 2025 18:28:49 +0000 (0:00:00.797) 0:00:06.748 *********** 2025-07-04 18:31:05.335934 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': 
'3000'}}}}) 2025-07-04 18:31:05.335945 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-04 18:31:05.335957 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-04 18:31:05.335977 | orchestrator | 2025-07-04 18:31:05.335988 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2025-07-04 18:31:05.335999 | orchestrator | Friday 04 July 2025 18:28:50 +0000 (0:00:01.262) 0:00:08.010 *********** 2025-07-04 18:31:05.336010 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-04 18:31:05.336031 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-04 18:31:05.336047 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-04 18:31:05.336059 | orchestrator | 2025-07-04 18:31:05.336069 | orchestrator | TASK 
[grafana : Copying over extra configuration file] *************************
2025-07-04 18:31:05.336080 | orchestrator | Friday 04 July 2025 18:28:52 +0000 (0:00:01.412) 0:00:09.423 ***********
2025-07-04 18:31:05.336091 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:31:05.336101 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:31:05.336112 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:31:05.336123 | orchestrator |
2025-07-04 18:31:05.336133 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] *************
2025-07-04 18:31:05.336144 | orchestrator | Friday 04 July 2025 18:28:52 +0000 (0:00:00.516) 0:00:09.939 ***********
2025-07-04 18:31:05.336155 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-07-04 18:31:05.336165 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-07-04 18:31:05.336176 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-07-04 18:31:05.336187 | orchestrator |
2025-07-04 18:31:05.336197 | orchestrator | TASK [grafana : Configuring dashboards provisioning] ***************************
2025-07-04 18:31:05.336208 | orchestrator | Friday 04 July 2025 18:28:54 +0000 (0:00:01.370) 0:00:11.310 ***********
2025-07-04 18:31:05.336219 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-07-04 18:31:05.336230 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-07-04 18:31:05.336241 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-07-04 18:31:05.336273 | orchestrator |
2025-07-04 18:31:05.336284 | orchestrator | TASK [grafana : Find custom grafana dashboards] ********************************
2025-07-04 18:31:05.336295 | orchestrator | Friday 04 July 2025 18:28:55 +0000 (0:00:01.266) 0:00:12.576 ***********
2025-07-04 18:31:05.336306 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-07-04 18:31:05.336317 | orchestrator |
2025-07-04 18:31:05.336327 | orchestrator | TASK [grafana : Find templated grafana dashboards] *****************************
2025-07-04 18:31:05.336338 | orchestrator | Friday 04 July 2025 18:28:56 +0000 (0:00:00.813) 0:00:13.390 ***********
2025-07-04 18:31:05.336349 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access
2025-07-04 18:31:05.336359 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory
2025-07-04 18:31:05.336370 | orchestrator | ok: [testbed-node-0]
2025-07-04 18:31:05.336381 | orchestrator | ok: [testbed-node-1]
2025-07-04 18:31:05.336392 | orchestrator | ok: [testbed-node-2]
2025-07-04 18:31:05.336403 | orchestrator |
2025-07-04 18:31:05.336414 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] ****************************
2025-07-04 18:31:05.336424 | orchestrator | Friday 04 July 2025 18:28:57 +0000 (0:00:00.692) 0:00:14.083 ***********
2025-07-04 18:31:05.336456 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:31:05.336467 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:31:05.336478 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:31:05.336489 | orchestrator |
2025-07-04 18:31:05.336499 | orchestrator | TASK [grafana : Copying over custom dashboards] ********************************
2025-07-04 18:31:05.336510 | orchestrator | Friday 04 July 2025 18:28:57 +0000 (0:00:00.577) 0:00:14.661 ***********
2025-07-04 18:31:05.336523 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo':
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1090008, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.6581573, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-04 18:31:05.336542 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1090008, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.6581573, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-04 18:31:05.336569 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1090008, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.6581573, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-04 18:31:05.336581 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1090002, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.6471572, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-04 18:31:05.336601 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1090002, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.6471572, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-04 18:31:05.336612 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1090002, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.6471572, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-04 18:31:05.336623 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': 
False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1089999, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.6441572, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-04 18:31:05.336643 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1089999, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.6441572, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-04 18:31:05.336660 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1089999, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.6441572, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-04 18:31:05.336671 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': 
'0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1090005, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.6541574, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-04 18:31:05.336689 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1090005, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.6541574, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-04 18:31:05.336700 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1090005, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.6541574, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-04 18:31:05.336711 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1089995, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.6411572, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-04 18:31:05.336728 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1089995, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.6411572, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-04 18:31:05.336739 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1089995, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.6411572, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-04 18:31:05.336756 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 
'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1090000, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.6461573, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-04 18:31:05.336774 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1090000, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.6461573, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-04 18:31:05.336785 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1090000, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.6461573, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-04 18:31:05.336796 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1090004, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.6491573, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-04 18:31:05.336808 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1090004, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.6491573, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-04 18:31:05.336827 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1090004, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.6491573, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-04 
18:31:05.336844 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1089994, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.6401572, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-04 18:31:05.336863 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1089994, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.6401572, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-04 18:31:05.336874 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1089994, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.6401572, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False}}) 2025-07-04 18:31:05.336886 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1089989, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.635157, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-04 18:31:05.336897 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1089989, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.635157, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-04 18:31:05.336917 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1089989, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.635157, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-04 18:31:05.336928 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1089996, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.6411572, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-04 18:31:05.336952 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1089996, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.6411572, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-04 18:31:05.336964 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1089996, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.6411572, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 
2025-07-04 18:31:05.336976 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1089991, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.637157, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-04 18:31:05.336987 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1089991, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.637157, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-04 18:31:05.336998 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1089991, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.637157, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False}}) 2025-07-04 18:31:05.337017 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1090003, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.6481574, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-04 18:31:05.337033 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1090003, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.6481574, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-04 18:31:05.337059 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1090003, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.6481574, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-04 18:31:05.337079 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1089997, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.6441572, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-04 18:31:05.337098 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1089997, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.6441572, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-04 18:31:05.337117 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1089997, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.6441572, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-04 18:31:05.337146 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1090007, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.6561575, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-04 18:31:05.337174 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1090007, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.6561575, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-04 18:31:05.337206 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1090007, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.6561575, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-04 18:31:05.337225 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1089993, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.639157, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-04 18:31:05.337245 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1089993, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.639157, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-04 18:31:05.337257 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1089993, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.639157, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-04 18:31:05.337275 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1090001, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.6471572, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-04 18:31:05.337287 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1090001, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.6471572, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-04 18:31:05.337316 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1090001, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.6471572, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-04 18:31:05.337328 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1089990, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.637157, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-04 18:31:05.337339 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1089990, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.637157, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-04 18:31:05.337351 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1089990, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.637157, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-04 18:31:05.337362 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1089992, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.6381571, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-04 18:31:05.337381 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1089992, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.6381571, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-04 18:31:05.337405 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1089992, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.6381571, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-04 18:31:05.337416 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1089998, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.6441572, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-04 18:31:05.337427 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1089998, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.6441572, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-04 18:31:05.337493 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1089998, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.6441572, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-04 18:31:05.337522 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1090034, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.6801577, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-04 18:31:05.337552 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1090034, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.6801577, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-04 18:31:05.337597 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1090034, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.6801577, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-04 18:31:05.337616 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1090024, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.6711576, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-04 18:31:05.337634 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1090024, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.6711576, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-04 18:31:05.337646 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1090024, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.6711576, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-04 18:31:05.337657 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1090010, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.6591575, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-04 18:31:05.337677 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1090010, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.6591575, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-04 18:31:05.337696 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1090010, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.6591575, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-04 18:31:05.337713 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1090052, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.691158, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-04 18:31:05.337725 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1090052, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.691158, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-04 18:31:05.337736 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1090052, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.691158, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-04 18:31:05.337747 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1090011, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.6601574, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-04 18:31:05.338198 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1090011, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.6601574, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-04 18:31:05.338234 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1090011, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.6601574, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-04 18:31:05.338253 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1090047, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.686158, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-04 18:31:05.338265 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1090047, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.686158, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-04 18:31:05.338276 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1090047, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.686158, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-04 18:31:05.338288 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1090053, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.696158, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-04 18:31:05.338306 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1090053, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.696158, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-04 18:31:05.338322 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1090053, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.696158, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-04 18:31:05.338339 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1090043, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.6821578, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-04 18:31:05.338350 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1090043, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.6821578, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-04 18:31:05.338362 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1090043, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.6821578, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-04 18:31:05.338373 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1090046, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.6841578, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-04 18:31:05.338395 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1090046, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.6841578, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-04 18:31:05.338413 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1090046, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.6841578, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-04 18:31:05.338453 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1090012, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.6611574, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-04 18:31:05.338467 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1090012, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.6611574, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-04 18:31:05.338478 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1090012, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.6611574, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-04 18:31:05.338490 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1090026, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.6721575, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-04 18:31:05.338508 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1090026, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.6721575, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-04 18:31:05.338527 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1090026, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.6721575, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-04 18:31:05.338543 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1090058, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.697158, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-04 18:31:05.338556 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1090058, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.697158, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-04 18:31:05.338567 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1090058, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.697158, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-04 18:31:05.338578 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1090048, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.6881578, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-04 18:31:05.338600 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1090048, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.6881578, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-04 18:31:05.338618 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1090048, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.6881578, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-04 18:31:05.338630 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1090015, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.6651576, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-04 18:31:05.338646 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1090015, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.6651576, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-04 18:31:05.338658 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1090015, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.6651576, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-04 18:31:05.338669 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1090013, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime':
1748870577.0, 'ctime': 1751650761.6611574, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-04 18:31:05.338688 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1090013, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.6611574, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-04 18:31:05.338705 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1090013, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.6611574, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-04 18:31:05.338717 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1090018, 'dev': 
76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.6661575, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-04 18:31:05.338733 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1090018, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.6661575, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-04 18:31:05.338744 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1090018, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.6661575, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-04 18:31:05.338756 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 
0, 'size': 410814, 'inode': 1090020, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.6711576, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-04 18:31:05.338774 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1090020, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.6711576, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-04 18:31:05.338786 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1090020, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.6711576, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-04 18:31:05.338806 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1090031, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.6731577, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-04 18:31:05.338825 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1090045, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.6821578, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-04 18:31:05.338839 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1090031, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.6731577, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-04 18:31:05.338852 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1090031, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.6731577, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-04 18:31:05.338873 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1090032, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.6751578, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-04 18:31:05.338885 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1090045, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.6821578, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-04 18:31:05.338907 | orchestrator | changed: [testbed-node-0] 
=> (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1090045, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.6821578, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-04 18:31:05.338925 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1090060, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.698158, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-04 18:31:05.338939 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1090032, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.6751578, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-04 
18:31:05.338952 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1090032, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.6751578, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-04 18:31:05.338972 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1090060, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.698158, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-04 18:31:05.338985 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1090060, 'dev': 76, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751650761.698158, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 
'isuid': False, 'isgid': False}})
2025-07-04 18:31:05.338997 | orchestrator |
2025-07-04 18:31:05.339010 | orchestrator | TASK [grafana : Check grafana containers] **************************************
2025-07-04 18:31:05.339023 | orchestrator | Friday 04 July 2025 18:29:34 +0000 (0:00:37.013) 0:00:51.674 ***********
2025-07-04 18:31:05.339042 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-07-04 18:31:05.339061 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-07-04 18:31:05.339076 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-07-04 18:31:05.339088 | orchestrator |
2025-07-04 18:31:05.339101 | orchestrator | TASK [grafana : Creating grafana database] *************************************
2025-07-04 18:31:05.339114 | orchestrator | Friday 04 July 2025 18:29:35 +0000 (0:00:00.950) 0:00:52.624 ***********
2025-07-04 18:31:05.339126 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:31:05.339147 | orchestrator |
2025-07-04 18:31:05.339160 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ********
2025-07-04 18:31:05.339172 | orchestrator | Friday 04 July 2025 18:29:37 +0000 (0:00:02.123) 0:00:54.747 ***********
2025-07-04 18:31:05.339183 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:31:05.339194 | orchestrator |
2025-07-04 18:31:05.339204 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-07-04 18:31:05.339215 | orchestrator | Friday 04 July 2025 18:29:39 +0000 (0:00:02.115) 0:00:56.863 ***********
2025-07-04 18:31:05.339226 | orchestrator |
2025-07-04 18:31:05.339237 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-07-04 18:31:05.339247 | orchestrator | Friday 04 July 2025 18:29:40 +0000 (0:00:00.273) 0:00:57.136 ***********
2025-07-04 18:31:05.339258 | orchestrator |
2025-07-04 18:31:05.339269 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-07-04 18:31:05.339279 | orchestrator | Friday 04 July 2025 18:29:40 +0000 (0:00:00.079) 0:00:57.216 ***********
2025-07-04 18:31:05.339290 | orchestrator |
2025-07-04 18:31:05.339301 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ********************
2025-07-04 18:31:05.339312 | orchestrator | Friday 04 July 2025 18:29:40 +0000 (0:00:00.066) 0:00:57.282 ***********
2025-07-04 18:31:05.339322 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:31:05.339333 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:31:05.339344 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:31:05.339354 | orchestrator |
2025-07-04 18:31:05.339365 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] *********
2025-07-04 18:31:05.339376 | orchestrator | Friday 04 July 2025 18:29:47 +0000 (0:00:06.801) 0:01:04.084 ***********
2025-07-04 18:31:05.339386 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:31:05.339397 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:31:05.339408 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left).
2025-07-04 18:31:05.339419 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left).
2025-07-04 18:31:05.339449 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left).
2025-07-04 18:31:05.339460 | orchestrator | ok: [testbed-node-0]
2025-07-04 18:31:05.339471 | orchestrator |
2025-07-04 18:31:05.339482 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] ***************
2025-07-04 18:31:05.339493 | orchestrator | Friday 04 July 2025 18:30:25 +0000 (0:00:38.649) 0:01:42.734 ***********
2025-07-04 18:31:05.339504 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:31:05.339515 | orchestrator | changed: [testbed-node-2]
2025-07-04 18:31:05.339525 | orchestrator | changed: [testbed-node-1]
2025-07-04 18:31:05.339536 | orchestrator |
2025-07-04 18:31:05.339547 | orchestrator | TASK [grafana : Wait for grafana application ready] ****************************
2025-07-04 18:31:05.339558 | orchestrator | Friday 04 July 2025 18:30:58 +0000 (0:00:32.399) 0:02:15.133 ***********
2025-07-04 18:31:05.339569 | orchestrator | ok: [testbed-node-0]
2025-07-04 18:31:05.339579 | orchestrator |
2025-07-04 18:31:05.339590 | orchestrator | TASK [grafana : Remove old grafana docker volume] ******************************
2025-07-04 18:31:05.339612 | orchestrator | Friday 04 July 2025 18:31:00 +0000 (0:00:02.439) 0:02:17.573 ***********
2025-07-04 18:31:05.339630 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:31:05.339649 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:31:05.339674 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:31:05.339698 | orchestrator |
2025-07-04 18:31:05.339713 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************
2025-07-04 18:31:05.339730 | orchestrator | Friday 04 July 2025 18:31:00 +0000 (0:00:00.454) 0:02:18.028 ***********
2025-07-04 18:31:05.339748 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})
2025-07-04 18:31:05.339787 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}})
2025-07-04 18:31:05.339807 | orchestrator |
2025-07-04 18:31:05.339824 | orchestrator | TASK [grafana : Disable Getting Started panel] *********************************
2025-07-04 18:31:05.339840 | orchestrator | Friday 04 July 2025 18:31:03 +0000 (0:00:02.811) 0:02:20.839 ***********
2025-07-04 18:31:05.339857 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:31:05.339874 | orchestrator |
2025-07-04 18:31:05.339891 | orchestrator | PLAY RECAP *********************************************************************
2025-07-04 18:31:05.339909 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-07-04 18:31:05.339929 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-07-04 18:31:05.339946 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-07-04 18:31:05.339964 | orchestrator |
2025-07-04 18:31:05.339982 | orchestrator |
2025-07-04 18:31:05.340000 | orchestrator | TASKS RECAP ********************************************************************
2025-07-04 18:31:05.340017 | orchestrator | Friday 04 July 2025 18:31:04 +0000 (0:00:00.253) 0:02:21.093 ***********
2025-07-04 18:31:05.340036 | orchestrator | ===============================================================================
2025-07-04 18:31:05.340053 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 38.65s
2025-07-04 18:31:05.340071 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 37.01s
2025-07-04 18:31:05.340090 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 32.40s
2025-07-04 18:31:05.340110 | orchestrator | grafana : Restart first grafana container ------------------------------- 6.80s
2025-07-04 18:31:05.340130 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.81s
2025-07-04 18:31:05.340150 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.44s
2025-07-04 18:31:05.340169 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.12s
2025-07-04 18:31:05.340188 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.12s
2025-07-04 18:31:05.340207 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.41s
2025-07-04 18:31:05.340226 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.39s
2025-07-04 18:31:05.340245 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.37s
2025-07-04 18:31:05.340266 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.27s
2025-07-04 18:31:05.340285 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.26s
2025-07-04 18:31:05.340303 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 1.13s
2025-07-04 18:31:05.340314 | orchestrator | grafana : Check grafana containers -------------------------------------- 0.95s
2025-07-04 18:31:05.340325 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.81s
2025-07-04 18:31:05.340336 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.80s
2025-07-04 18:31:05.340346 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.80s
2025-07-04 18:31:05.340357 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.71s
2025-07-04 18:31:05.340368 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.69s
2025-07-04 18:31:05.340379 | orchestrator | 2025-07-04 18:31:05 | INFO  | Task 27c69a28-ef59-445a-a906-1b87b138db98 is in state SUCCESS
2025-07-04 18:31:05.340401 | orchestrator | 2025-07-04 18:31:05 | INFO  | Task 0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED
2025-07-04 18:31:05.340412 | orchestrator | 2025-07-04 18:31:05 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:31:08.373816 | orchestrator | 2025-07-04 18:31:08 | INFO  | Task 0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED
2025-07-04 18:31:08.373934 | orchestrator | 2025-07-04 18:31:08 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:31:11.427708 | orchestrator | 2025-07-04 18:31:11 | INFO  | Task 0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED
2025-07-04 18:31:11.427834 | orchestrator | 2025-07-04 18:31:11 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:31:14.478436 | orchestrator | 2025-07-04 18:31:14 | INFO  | Task 0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED
2025-07-04 18:31:14.478557 | orchestrator | 2025-07-04 18:31:14 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:31:17.517984 | orchestrator | 2025-07-04 18:31:17 | INFO  | Task 0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED
2025-07-04 18:31:17.518130 | orchestrator | 2025-07-04 18:31:17 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:31:20.567804 | orchestrator | 2025-07-04 18:31:20 | INFO  | Task 0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED
2025-07-04 18:31:20.567912 | orchestrator | 2025-07-04 18:31:20 | INFO  | Wait 1 second(s) until the next check
2025-07-04 18:31:23.611109 | orchestrator | 2025-07-04 18:31:23 | INFO  | Task
0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED 2025-07-04 18:32:36.752113 | orchestrator | 2025-07-04 18:32:36 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:32:39.802778 | orchestrator | 2025-07-04 18:32:39 | INFO  | Task 0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED 2025-07-04 18:32:39.802909 | orchestrator | 2025-07-04 18:32:39 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:32:42.852377 | orchestrator | 2025-07-04 18:32:42 | INFO  | Task 0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED 2025-07-04 18:32:42.852473 | orchestrator | 2025-07-04 18:32:42 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:32:45.903526 | orchestrator | 2025-07-04 18:32:45 | INFO  | Task 0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED 2025-07-04 18:32:45.903630 | orchestrator | 2025-07-04 18:32:45 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:32:48.958165 | orchestrator | 2025-07-04 18:32:48 | INFO  | Task 0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED 2025-07-04 18:32:48.958273 | orchestrator | 2025-07-04 18:32:48 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:32:51.999976 | orchestrator | 2025-07-04 18:32:51 | INFO  | Task 0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED 2025-07-04 18:32:52.000086 | orchestrator | 2025-07-04 18:32:51 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:32:55.042920 | orchestrator | 2025-07-04 18:32:55 | INFO  | Task 0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED 2025-07-04 18:32:55.043032 | orchestrator | 2025-07-04 18:32:55 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:32:58.093414 | orchestrator | 2025-07-04 18:32:58 | INFO  | Task 0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED 2025-07-04 18:32:58.093545 | orchestrator | 2025-07-04 18:32:58 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:33:01.142823 | orchestrator | 2025-07-04 18:33:01 | INFO  | Task 
0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED 2025-07-04 18:33:01.142924 | orchestrator | 2025-07-04 18:33:01 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:33:04.192192 | orchestrator | 2025-07-04 18:33:04 | INFO  | Task 0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED 2025-07-04 18:33:04.192295 | orchestrator | 2025-07-04 18:33:04 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:33:07.247948 | orchestrator | 2025-07-04 18:33:07 | INFO  | Task 0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED 2025-07-04 18:33:07.248063 | orchestrator | 2025-07-04 18:33:07 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:33:10.304853 | orchestrator | 2025-07-04 18:33:10 | INFO  | Task 0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED 2025-07-04 18:33:10.304965 | orchestrator | 2025-07-04 18:33:10 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:33:13.349487 | orchestrator | 2025-07-04 18:33:13 | INFO  | Task 0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED 2025-07-04 18:33:13.349615 | orchestrator | 2025-07-04 18:33:13 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:33:16.400187 | orchestrator | 2025-07-04 18:33:16 | INFO  | Task 0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED 2025-07-04 18:33:16.400318 | orchestrator | 2025-07-04 18:33:16 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:33:19.444383 | orchestrator | 2025-07-04 18:33:19 | INFO  | Task 0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED 2025-07-04 18:33:19.444512 | orchestrator | 2025-07-04 18:33:19 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:33:22.492539 | orchestrator | 2025-07-04 18:33:22 | INFO  | Task 0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED 2025-07-04 18:33:22.492640 | orchestrator | 2025-07-04 18:33:22 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:33:25.540059 | orchestrator | 2025-07-04 18:33:25 | INFO  | Task 
0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED 2025-07-04 18:33:25.540144 | orchestrator | 2025-07-04 18:33:25 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:33:28.586701 | orchestrator | 2025-07-04 18:33:28 | INFO  | Task 0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED 2025-07-04 18:33:28.586810 | orchestrator | 2025-07-04 18:33:28 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:33:31.660357 | orchestrator | 2025-07-04 18:33:31 | INFO  | Task 0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED 2025-07-04 18:33:31.660445 | orchestrator | 2025-07-04 18:33:31 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:33:34.711558 | orchestrator | 2025-07-04 18:33:34 | INFO  | Task 0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED 2025-07-04 18:33:34.711711 | orchestrator | 2025-07-04 18:33:34 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:33:37.766536 | orchestrator | 2025-07-04 18:33:37 | INFO  | Task 0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED 2025-07-04 18:33:37.766617 | orchestrator | 2025-07-04 18:33:37 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:33:40.817005 | orchestrator | 2025-07-04 18:33:40 | INFO  | Task 0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED 2025-07-04 18:33:40.817105 | orchestrator | 2025-07-04 18:33:40 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:33:43.868566 | orchestrator | 2025-07-04 18:33:43 | INFO  | Task 0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED 2025-07-04 18:33:43.868667 | orchestrator | 2025-07-04 18:33:43 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:33:46.919031 | orchestrator | 2025-07-04 18:33:46 | INFO  | Task 0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED 2025-07-04 18:33:46.919771 | orchestrator | 2025-07-04 18:33:46 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:33:49.965566 | orchestrator | 2025-07-04 18:33:49 | INFO  | Task 
0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED 2025-07-04 18:33:49.965699 | orchestrator | 2025-07-04 18:33:49 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:33:53.016371 | orchestrator | 2025-07-04 18:33:53 | INFO  | Task 0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED 2025-07-04 18:33:53.016477 | orchestrator | 2025-07-04 18:33:53 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:33:56.073464 | orchestrator | 2025-07-04 18:33:56 | INFO  | Task 0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED 2025-07-04 18:33:56.073566 | orchestrator | 2025-07-04 18:33:56 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:33:59.124440 | orchestrator | 2025-07-04 18:33:59 | INFO  | Task 0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED 2025-07-04 18:33:59.124541 | orchestrator | 2025-07-04 18:33:59 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:34:02.167398 | orchestrator | 2025-07-04 18:34:02 | INFO  | Task 0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED 2025-07-04 18:34:02.167496 | orchestrator | 2025-07-04 18:34:02 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:34:05.211276 | orchestrator | 2025-07-04 18:34:05 | INFO  | Task 0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED 2025-07-04 18:34:05.211408 | orchestrator | 2025-07-04 18:34:05 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:34:08.254414 | orchestrator | 2025-07-04 18:34:08 | INFO  | Task 0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED 2025-07-04 18:34:08.254524 | orchestrator | 2025-07-04 18:34:08 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:34:11.301630 | orchestrator | 2025-07-04 18:34:11 | INFO  | Task 0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED 2025-07-04 18:34:11.301726 | orchestrator | 2025-07-04 18:34:11 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:34:14.344884 | orchestrator | 2025-07-04 18:34:14 | INFO  | Task 
0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED 2025-07-04 18:34:14.345079 | orchestrator | 2025-07-04 18:34:14 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:34:17.392059 | orchestrator | 2025-07-04 18:34:17 | INFO  | Task 0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED 2025-07-04 18:34:17.392795 | orchestrator | 2025-07-04 18:34:17 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:34:20.434659 | orchestrator | 2025-07-04 18:34:20 | INFO  | Task 0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED 2025-07-04 18:34:20.434772 | orchestrator | 2025-07-04 18:34:20 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:34:23.480118 | orchestrator | 2025-07-04 18:34:23 | INFO  | Task 0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED 2025-07-04 18:34:23.480206 | orchestrator | 2025-07-04 18:34:23 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:34:26.529674 | orchestrator | 2025-07-04 18:34:26 | INFO  | Task 0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED 2025-07-04 18:34:26.529789 | orchestrator | 2025-07-04 18:34:26 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:34:29.572474 | orchestrator | 2025-07-04 18:34:29 | INFO  | Task 0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED 2025-07-04 18:34:29.572577 | orchestrator | 2025-07-04 18:34:29 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:34:32.622459 | orchestrator | 2025-07-04 18:34:32 | INFO  | Task 0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED 2025-07-04 18:34:32.622647 | orchestrator | 2025-07-04 18:34:32 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:34:35.663412 | orchestrator | 2025-07-04 18:34:35 | INFO  | Task 0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED 2025-07-04 18:34:35.663545 | orchestrator | 2025-07-04 18:34:35 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:34:38.712882 | orchestrator | 2025-07-04 18:34:38 | INFO  | Task 
0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED 2025-07-04 18:34:38.712983 | orchestrator | 2025-07-04 18:34:38 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:34:41.761412 | orchestrator | 2025-07-04 18:34:41 | INFO  | Task 0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED 2025-07-04 18:34:41.761523 | orchestrator | 2025-07-04 18:34:41 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:34:44.812430 | orchestrator | 2025-07-04 18:34:44 | INFO  | Task 0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED 2025-07-04 18:34:44.812520 | orchestrator | 2025-07-04 18:34:44 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:34:47.847759 | orchestrator | 2025-07-04 18:34:47 | INFO  | Task 0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED 2025-07-04 18:34:47.847866 | orchestrator | 2025-07-04 18:34:47 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:34:50.883366 | orchestrator | 2025-07-04 18:34:50 | INFO  | Task 0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED 2025-07-04 18:34:50.883501 | orchestrator | 2025-07-04 18:34:50 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:34:53.934978 | orchestrator | 2025-07-04 18:34:53 | INFO  | Task 0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED 2025-07-04 18:34:53.935141 | orchestrator | 2025-07-04 18:34:53 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:34:56.983427 | orchestrator | 2025-07-04 18:34:56 | INFO  | Task 0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED 2025-07-04 18:34:56.983534 | orchestrator | 2025-07-04 18:34:56 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:35:00.031611 | orchestrator | 2025-07-04 18:35:00 | INFO  | Task 0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED 2025-07-04 18:35:00.031705 | orchestrator | 2025-07-04 18:35:00 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:35:03.080465 | orchestrator | 2025-07-04 18:35:03 | INFO  | Task 
0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED 2025-07-04 18:35:03.080571 | orchestrator | 2025-07-04 18:35:03 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:35:06.124905 | orchestrator | 2025-07-04 18:35:06 | INFO  | Task 0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED 2025-07-04 18:35:06.125044 | orchestrator | 2025-07-04 18:35:06 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:35:09.175031 | orchestrator | 2025-07-04 18:35:09 | INFO  | Task 0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED 2025-07-04 18:35:09.175181 | orchestrator | 2025-07-04 18:35:09 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:35:12.224094 | orchestrator | 2025-07-04 18:35:12 | INFO  | Task 0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED 2025-07-04 18:35:12.224289 | orchestrator | 2025-07-04 18:35:12 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:35:15.268904 | orchestrator | 2025-07-04 18:35:15 | INFO  | Task 0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED 2025-07-04 18:35:15.269000 | orchestrator | 2025-07-04 18:35:15 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:35:18.311882 | orchestrator | 2025-07-04 18:35:18 | INFO  | Task 0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED 2025-07-04 18:35:18.312004 | orchestrator | 2025-07-04 18:35:18 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:35:21.367660 | orchestrator | 2025-07-04 18:35:21 | INFO  | Task 0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED 2025-07-04 18:35:21.367785 | orchestrator | 2025-07-04 18:35:21 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:35:24.407601 | orchestrator | 2025-07-04 18:35:24 | INFO  | Task 0286d4cd-37da-4905-b02c-80661dc010e4 is in state STARTED 2025-07-04 18:35:24.407709 | orchestrator | 2025-07-04 18:35:24 | INFO  | Wait 1 second(s) until the next check 2025-07-04 18:35:27.456799 | orchestrator | 2025-07-04 18:35:27 | INFO  | Task 
0286d4cd-37da-4905-b02c-80661dc010e4 is in state SUCCESS 2025-07-04 18:35:27.458537 | orchestrator | 2025-07-04 18:35:27.458582 | orchestrator | 2025-07-04 18:35:27.458594 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-04 18:35:27.458607 | orchestrator | 2025-07-04 18:35:27.458618 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2025-07-04 18:35:27.458629 | orchestrator | Friday 04 July 2025 18:26:39 +0000 (0:00:01.315) 0:00:01.315 *********** 2025-07-04 18:35:27.458641 | orchestrator | changed: [testbed-manager] 2025-07-04 18:35:27.458654 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:35:27.458665 | orchestrator | changed: [testbed-node-1] 2025-07-04 18:35:27.458675 | orchestrator | changed: [testbed-node-2] 2025-07-04 18:35:27.458686 | orchestrator | changed: [testbed-node-3] 2025-07-04 18:35:27.458722 | orchestrator | changed: [testbed-node-4] 2025-07-04 18:35:27.458733 | orchestrator | changed: [testbed-node-5] 2025-07-04 18:35:27.458744 | orchestrator | 2025-07-04 18:35:27.458754 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-04 18:35:27.458765 | orchestrator | Friday 04 July 2025 18:26:41 +0000 (0:00:02.319) 0:00:03.634 *********** 2025-07-04 18:35:27.458854 | orchestrator | changed: [testbed-manager] 2025-07-04 18:35:27.458866 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:35:27.458877 | orchestrator | changed: [testbed-node-1] 2025-07-04 18:35:27.458900 | orchestrator | changed: [testbed-node-2] 2025-07-04 18:35:27.458911 | orchestrator | changed: [testbed-node-3] 2025-07-04 18:35:27.458922 | orchestrator | changed: [testbed-node-4] 2025-07-04 18:35:27.458933 | orchestrator | changed: [testbed-node-5] 2025-07-04 18:35:27.458944 | orchestrator | 2025-07-04 18:35:27.458970 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 
2025-07-04 18:35:27.458992 | orchestrator | Friday 04 July 2025 18:26:42 +0000 (0:00:00.814) 0:00:04.448 *********** 2025-07-04 18:35:27.459023 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2025-07-04 18:35:27.459035 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2025-07-04 18:35:27.459045 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2025-07-04 18:35:27.459056 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2025-07-04 18:35:27.459067 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2025-07-04 18:35:27.459080 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2025-07-04 18:35:27.459092 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2025-07-04 18:35:27.459104 | orchestrator | 2025-07-04 18:35:27.459116 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2025-07-04 18:35:27.459129 | orchestrator | 2025-07-04 18:35:27.459168 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-07-04 18:35:27.459188 | orchestrator | Friday 04 July 2025 18:26:44 +0000 (0:00:01.535) 0:00:05.984 *********** 2025-07-04 18:35:27.459206 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-04 18:35:27.459227 | orchestrator | 2025-07-04 18:35:27.459244 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2025-07-04 18:35:27.459263 | orchestrator | Friday 04 July 2025 18:26:45 +0000 (0:00:01.281) 0:00:07.265 *********** 2025-07-04 18:35:27.459278 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2025-07-04 18:35:27.459291 | orchestrator | changed: [testbed-node-0] => (item=nova_api) 2025-07-04 18:35:27.459303 | orchestrator | 2025-07-04 18:35:27.459315 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 
2025-07-04 18:35:27.459328 | orchestrator | Friday 04 July 2025 18:26:50 +0000 (0:00:04.824) 0:00:12.089 *********** 2025-07-04 18:35:27.459341 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-07-04 18:35:27.459353 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-07-04 18:35:27.459365 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:35:27.459377 | orchestrator | 2025-07-04 18:35:27.459390 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-07-04 18:35:27.459402 | orchestrator | Friday 04 July 2025 18:26:54 +0000 (0:00:04.308) 0:00:16.398 *********** 2025-07-04 18:35:27.459414 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:35:27.459426 | orchestrator | 2025-07-04 18:35:27.459438 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2025-07-04 18:35:27.459448 | orchestrator | Friday 04 July 2025 18:26:55 +0000 (0:00:00.677) 0:00:17.076 *********** 2025-07-04 18:35:27.459459 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:35:27.459469 | orchestrator | 2025-07-04 18:35:27.459480 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2025-07-04 18:35:27.459490 | orchestrator | Friday 04 July 2025 18:26:56 +0000 (0:00:01.387) 0:00:18.463 *********** 2025-07-04 18:35:27.459502 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:35:27.459522 | orchestrator | 2025-07-04 18:35:27.459547 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-07-04 18:35:27.459558 | orchestrator | Friday 04 July 2025 18:26:59 +0000 (0:00:02.347) 0:00:20.811 *********** 2025-07-04 18:35:27.459569 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:35:27.459580 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:35:27.459590 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:35:27.459601 | orchestrator | 2025-07-04 18:35:27.459612 | 
orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-07-04 18:35:27.459622 | orchestrator | Friday 04 July 2025 18:26:59 +0000 (0:00:00.304) 0:00:21.116 *********** 2025-07-04 18:35:27.459633 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:35:27.459644 | orchestrator | 2025-07-04 18:35:27.459654 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2025-07-04 18:35:27.459665 | orchestrator | Friday 04 July 2025 18:27:31 +0000 (0:00:32.166) 0:00:53.282 *********** 2025-07-04 18:35:27.459675 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:35:27.459686 | orchestrator | 2025-07-04 18:35:27.459696 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-07-04 18:35:27.459707 | orchestrator | Friday 04 July 2025 18:27:46 +0000 (0:00:14.476) 0:01:07.758 *********** 2025-07-04 18:35:27.459718 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:35:27.459728 | orchestrator | 2025-07-04 18:35:27.459739 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-07-04 18:35:27.459749 | orchestrator | Friday 04 July 2025 18:27:58 +0000 (0:00:12.253) 0:01:20.011 *********** 2025-07-04 18:35:27.459776 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:35:27.459788 | orchestrator | 2025-07-04 18:35:27.459799 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2025-07-04 18:35:27.459810 | orchestrator | Friday 04 July 2025 18:27:59 +0000 (0:00:00.923) 0:01:20.935 *********** 2025-07-04 18:35:27.459820 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:35:27.459831 | orchestrator | 2025-07-04 18:35:27.459841 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-07-04 18:35:27.459852 | orchestrator | Friday 04 July 2025 18:27:59 +0000 (0:00:00.459) 0:01:21.395 *********** 2025-07-04 
18:35:27.459863 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-04 18:35:27.459874 | orchestrator | 2025-07-04 18:35:27.459885 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-07-04 18:35:27.459895 | orchestrator | Friday 04 July 2025 18:28:00 +0000 (0:00:00.516) 0:01:21.911 *********** 2025-07-04 18:35:27.459906 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:35:27.459917 | orchestrator | 2025-07-04 18:35:27.459927 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-07-04 18:35:27.459938 | orchestrator | Friday 04 July 2025 18:28:18 +0000 (0:00:18.067) 0:01:39.979 *********** 2025-07-04 18:35:27.459949 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:35:27.459959 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:35:27.459970 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:35:27.459981 | orchestrator | 2025-07-04 18:35:27.459991 | orchestrator | PLAY [Bootstrap nova cell databases] ******************************************* 2025-07-04 18:35:27.460002 | orchestrator | 2025-07-04 18:35:27.460013 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-07-04 18:35:27.460023 | orchestrator | Friday 04 July 2025 18:28:18 +0000 (0:00:00.323) 0:01:40.303 *********** 2025-07-04 18:35:27.460034 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-04 18:35:27.460045 | orchestrator | 2025-07-04 18:35:27.460055 | orchestrator | TASK [nova-cell : Creating Nova cell database] ********************************* 2025-07-04 18:35:27.460066 | orchestrator | Friday 04 July 2025 18:28:19 +0000 (0:00:00.580) 0:01:40.883 *********** 2025-07-04 18:35:27.460077 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:35:27.460088 | orchestrator | skipping: [testbed-node-2] 
2025-07-04 18:35:27.460098 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:35:27.460116 | orchestrator | 2025-07-04 18:35:27.460126 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] **** 2025-07-04 18:35:27.460137 | orchestrator | Friday 04 July 2025 18:28:21 +0000 (0:00:02.228) 0:01:43.112 *********** 2025-07-04 18:35:27.460223 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:35:27.460235 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:35:27.460246 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:35:27.460256 | orchestrator | 2025-07-04 18:35:27.460267 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-07-04 18:35:27.460278 | orchestrator | Friday 04 July 2025 18:28:23 +0000 (0:00:02.229) 0:01:45.342 *********** 2025-07-04 18:35:27.460289 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:35:27.460299 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:35:27.460310 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:35:27.460321 | orchestrator | 2025-07-04 18:35:27.460331 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-07-04 18:35:27.460341 | orchestrator | Friday 04 July 2025 18:28:23 +0000 (0:00:00.300) 0:01:45.642 *********** 2025-07-04 18:35:27.460351 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-07-04 18:35:27.460360 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:35:27.460370 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-07-04 18:35:27.460379 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:35:27.460389 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-07-04 18:35:27.460399 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}] 2025-07-04 18:35:27.460408 | orchestrator | 2025-07-04 18:35:27.460418 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts 
exist] ****************** 2025-07-04 18:35:27.460427 | orchestrator | Friday 04 July 2025 18:28:33 +0000 (0:00:09.678) 0:01:55.321 *********** 2025-07-04 18:35:27.460437 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:35:27.460446 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:35:27.460456 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:35:27.460465 | orchestrator | 2025-07-04 18:35:27.460475 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-07-04 18:35:27.460484 | orchestrator | Friday 04 July 2025 18:28:33 +0000 (0:00:00.306) 0:01:55.627 *********** 2025-07-04 18:35:27.460494 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-07-04 18:35:27.460509 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:35:27.460520 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-07-04 18:35:27.460529 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:35:27.460538 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-07-04 18:35:27.460548 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:35:27.460557 | orchestrator | 2025-07-04 18:35:27.460567 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-07-04 18:35:27.460576 | orchestrator | Friday 04 July 2025 18:28:34 +0000 (0:00:00.615) 0:01:56.243 *********** 2025-07-04 18:35:27.460586 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:35:27.460595 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:35:27.460605 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:35:27.460614 | orchestrator | 2025-07-04 18:35:27.460624 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ****** 2025-07-04 18:35:27.460633 | orchestrator | Friday 04 July 2025 18:28:35 +0000 (0:00:00.567) 0:01:56.810 *********** 2025-07-04 18:35:27.460643 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:35:27.460652 | 
orchestrator | skipping: [testbed-node-2] 2025-07-04 18:35:27.460661 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:35:27.460671 | orchestrator | 2025-07-04 18:35:27.460680 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2025-07-04 18:35:27.460690 | orchestrator | Friday 04 July 2025 18:28:36 +0000 (0:00:01.216) 0:01:58.027 *********** 2025-07-04 18:35:27.460700 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:35:27.460709 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:35:27.460732 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:35:27.460742 | orchestrator | 2025-07-04 18:35:27.460752 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2025-07-04 18:35:27.460762 | orchestrator | Friday 04 July 2025 18:28:38 +0000 (0:00:02.300) 0:02:00.328 *********** 2025-07-04 18:35:27.460771 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:35:27.460781 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:35:27.460790 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:35:27.460800 | orchestrator | 2025-07-04 18:35:27.460809 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-07-04 18:35:27.460819 | orchestrator | Friday 04 July 2025 18:28:59 +0000 (0:00:21.231) 0:02:21.559 *********** 2025-07-04 18:35:27.460828 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:35:27.460838 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:35:27.460847 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:35:27.460857 | orchestrator | 2025-07-04 18:35:27.460866 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-07-04 18:35:27.460876 | orchestrator | Friday 04 July 2025 18:29:11 +0000 (0:00:12.161) 0:02:33.720 *********** 2025-07-04 18:35:27.460885 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:35:27.460895 | orchestrator | 
skipping: [testbed-node-1] 2025-07-04 18:35:27.460904 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:35:27.460914 | orchestrator | 2025-07-04 18:35:27.460923 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2025-07-04 18:35:27.460933 | orchestrator | Friday 04 July 2025 18:29:12 +0000 (0:00:00.935) 0:02:34.656 *********** 2025-07-04 18:35:27.460942 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:35:27.460951 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:35:27.460961 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:35:27.460970 | orchestrator | 2025-07-04 18:35:27.460980 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2025-07-04 18:35:27.460989 | orchestrator | Friday 04 July 2025 18:29:24 +0000 (0:00:11.795) 0:02:46.451 *********** 2025-07-04 18:35:27.460999 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:35:27.461008 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:35:27.461018 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:35:27.461027 | orchestrator | 2025-07-04 18:35:27.461037 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-07-04 18:35:27.461046 | orchestrator | Friday 04 July 2025 18:29:26 +0000 (0:00:01.653) 0:02:48.105 *********** 2025-07-04 18:35:27.461056 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:35:27.461065 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:35:27.461075 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:35:27.461084 | orchestrator | 2025-07-04 18:35:27.461094 | orchestrator | PLAY [Apply role nova] ********************************************************* 2025-07-04 18:35:27.461103 | orchestrator | 2025-07-04 18:35:27.461113 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-07-04 18:35:27.461122 | orchestrator | Friday 04 July 2025 
18:29:26 +0000 (0:00:00.319) 0:02:48.424 *********** 2025-07-04 18:35:27.461132 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-04 18:35:27.461163 | orchestrator | 2025-07-04 18:35:27.461174 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2025-07-04 18:35:27.461184 | orchestrator | Friday 04 July 2025 18:29:27 +0000 (0:00:00.539) 0:02:48.964 *********** 2025-07-04 18:35:27.461193 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2025-07-04 18:35:27.461203 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2025-07-04 18:35:27.461212 | orchestrator | 2025-07-04 18:35:27.461221 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2025-07-04 18:35:27.461231 | orchestrator | Friday 04 July 2025 18:29:30 +0000 (0:00:03.225) 0:02:52.190 *********** 2025-07-04 18:35:27.461241 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2025-07-04 18:35:27.461259 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2025-07-04 18:35:27.461268 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2025-07-04 18:35:27.461278 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2025-07-04 18:35:27.461288 | orchestrator | 2025-07-04 18:35:27.461297 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2025-07-04 18:35:27.461312 | orchestrator | Friday 04 July 2025 18:29:36 +0000 (0:00:06.321) 0:02:58.511 *********** 2025-07-04 18:35:27.461322 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-07-04 18:35:27.461331 | orchestrator | 
2025-07-04 18:35:27.461340 | orchestrator | TASK [service-ks-register : nova | Creating users] ***************************** 2025-07-04 18:35:27.461350 | orchestrator | Friday 04 July 2025 18:29:40 +0000 (0:00:03.301) 0:03:01.813 *********** 2025-07-04 18:35:27.461359 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-07-04 18:35:27.461369 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2025-07-04 18:35:27.461378 | orchestrator | 2025-07-04 18:35:27.461388 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2025-07-04 18:35:27.461397 | orchestrator | Friday 04 July 2025 18:29:43 +0000 (0:00:03.771) 0:03:05.585 *********** 2025-07-04 18:35:27.461407 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-07-04 18:35:27.461416 | orchestrator | 2025-07-04 18:35:27.461426 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2025-07-04 18:35:27.461435 | orchestrator | Friday 04 July 2025 18:29:47 +0000 (0:00:03.212) 0:03:08.798 *********** 2025-07-04 18:35:27.461445 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2025-07-04 18:35:27.461454 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2025-07-04 18:35:27.461463 | orchestrator | 2025-07-04 18:35:27.461473 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-07-04 18:35:27.461488 | orchestrator | Friday 04 July 2025 18:29:54 +0000 (0:00:07.579) 0:03:16.377 *********** 2025-07-04 18:35:27.461504 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-04 18:35:27.461520 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 
'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-04 18:35:27.461543 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-04 18:35:27.461563 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-04 18:35:27.461576 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-04 18:35:27.461587 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-04 18:35:27.461597 | orchestrator | 2025-07-04 18:35:27.461607 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2025-07-04 18:35:27.461617 | orchestrator | Friday 04 July 2025 18:29:55 +0000 (0:00:01.164) 0:03:17.542 *********** 2025-07-04 18:35:27.461633 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:35:27.461643 | orchestrator | 2025-07-04 18:35:27.461653 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2025-07-04 18:35:27.461662 | orchestrator | Friday 04 July 2025 18:29:55 +0000 (0:00:00.129) 0:03:17.671 
*********** 2025-07-04 18:35:27.461672 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:35:27.461681 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:35:27.461691 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:35:27.461700 | orchestrator | 2025-07-04 18:35:27.461710 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2025-07-04 18:35:27.461720 | orchestrator | Friday 04 July 2025 18:29:56 +0000 (0:00:00.428) 0:03:18.100 *********** 2025-07-04 18:35:27.461729 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-07-04 18:35:27.461738 | orchestrator | 2025-07-04 18:35:27.461748 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2025-07-04 18:35:27.461757 | orchestrator | Friday 04 July 2025 18:29:56 +0000 (0:00:00.626) 0:03:18.727 *********** 2025-07-04 18:35:27.461767 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:35:27.461776 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:35:27.461786 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:35:27.461795 | orchestrator | 2025-07-04 18:35:27.461805 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-07-04 18:35:27.461814 | orchestrator | Friday 04 July 2025 18:29:57 +0000 (0:00:00.278) 0:03:19.005 *********** 2025-07-04 18:35:27.461823 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-04 18:35:27.461833 | orchestrator | 2025-07-04 18:35:27.461842 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-07-04 18:35:27.461852 | orchestrator | Friday 04 July 2025 18:29:57 +0000 (0:00:00.585) 0:03:19.591 *********** 2025-07-04 18:35:27.461872 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 
'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-04 18:35:27.461885 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-04 18:35:27.461903 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-04 18:35:27.461919 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-04 18:35:27.461930 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-04 18:35:27.461949 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-04 18:35:27.461959 | orchestrator | 2025-07-04 18:35:27.461969 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-07-04 18:35:27.461979 | orchestrator | Friday 04 July 2025 18:30:00 +0000 (0:00:02.227) 0:03:21.818 *********** 2025-07-04 18:35:27.461990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': 
{'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-07-04 18:35:27.462058 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-04 18:35:27.462072 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:35:27.462089 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': 
{'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-07-04 18:35:27.462107 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-04 18:35:27.462117 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:35:27.462128 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': 
{'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-07-04 18:35:27.462185 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-04 18:35:27.462198 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:35:27.462208 | orchestrator | 2025-07-04 18:35:27.462217 | orchestrator | TASK [service-cert-copy : nova | 
Copying over backend internal TLS key] ******** 2025-07-04 18:35:27.462227 | orchestrator | Friday 04 July 2025 18:30:00 +0000 (0:00:00.560) 0:03:22.378 *********** 2025-07-04 18:35:27.462242 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-07-04 18:35:27.462253 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-04 18:35:27.462263 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:35:27 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-07-04 18:35:27.462280 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-07-04 18:35:27.463077 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-04 18:35:27.463094 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:35:27.463124 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-07-04 18:35:27.463139 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-04 18:35:27.463234 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:35:27.463246 | orchestrator | 2025-07-04 18:35:27.463258 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2025-07-04 18:35:27.463270 | orchestrator | Friday 04 July 2025 18:30:01 +0000 (0:00:00.843) 0:03:23.222 *********** 2025-07-04 18:35:27.463303 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-04 18:35:27.463328 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 
'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-04 18:35:27.463342 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-04 18:35:27.463363 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-04 18:35:27.463385 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-04 18:35:27.463431 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-04 18:35:27.463444 | orchestrator | 2025-07-04 18:35:27.463455 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2025-07-04 18:35:27.463467 | orchestrator | Friday 04 July 2025 18:30:03 +0000 (0:00:02.310) 0:03:25.533 *********** 2025-07-04 18:35:27.463481 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-04 18:35:27.463501 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-04 18:35:27.463532 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 
'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-04 18:35:27.463546 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-04 18:35:27.463559 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-04 18:35:27.463572 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 
'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-04 18:35:27.463585 | orchestrator | 2025-07-04 18:35:27.463597 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2025-07-04 18:35:27.463610 | orchestrator | Friday 04 July 2025 18:30:09 +0000 (0:00:05.535) 0:03:31.069 *********** 2025-07-04 18:35:27.463636 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-07-04 18:35:27.463658 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-04 18:35:27.463671 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:35:27.463686 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-07-04 18:35:27.463701 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-04 18:35:27.463714 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:35:27.463733 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-07-04 18:35:27.463776 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-04 18:35:27.463790 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:35:27.463802 | orchestrator | 2025-07-04 18:35:27.463815 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2025-07-04 18:35:27.463827 | orchestrator | Friday 04 July 2025 18:30:09 +0000 (0:00:00.581) 0:03:31.651 *********** 2025-07-04 18:35:27.463839 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:35:27.463850 | orchestrator | changed: [testbed-node-1] 2025-07-04 18:35:27.463861 | orchestrator | changed: [testbed-node-2] 2025-07-04 18:35:27.463872 | orchestrator | 2025-07-04 18:35:27.463883 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2025-07-04 18:35:27.463894 | orchestrator | Friday 04 July 2025 18:30:12 +0000 (0:00:02.229) 0:03:33.880 *********** 2025-07-04 18:35:27.463904 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:35:27.463915 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:35:27.463926 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:35:27.463937 | orchestrator | 2025-07-04 18:35:27.463948 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2025-07-04 18:35:27.463958 | orchestrator | Friday 04 July 2025 18:30:12 +0000 (0:00:00.322) 0:03:34.203 *********** 2025-07-04 18:35:27.463970 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-04 18:35:27.463993 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 
'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-04 18:35:27.464045 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-04 18:35:27.464075 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-04 18:35:27.464095 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-04 18:35:27.464114 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-04 18:35:27.464170 | orchestrator | 2025-07-04 18:35:27.464189 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-07-04 18:35:27.464207 | orchestrator | Friday 04 July 2025 18:30:14 +0000 (0:00:01.912) 0:03:36.116 *********** 2025-07-04 
18:35:27.464224 | orchestrator | 2025-07-04 18:35:27.464240 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-07-04 18:35:27.464257 | orchestrator | Friday 04 July 2025 18:30:14 +0000 (0:00:00.154) 0:03:36.270 *********** 2025-07-04 18:35:27.464274 | orchestrator | 2025-07-04 18:35:27.464293 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-07-04 18:35:27.464329 | orchestrator | Friday 04 July 2025 18:30:14 +0000 (0:00:00.129) 0:03:36.400 *********** 2025-07-04 18:35:27.464350 | orchestrator | 2025-07-04 18:35:27.464367 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2025-07-04 18:35:27.464384 | orchestrator | Friday 04 July 2025 18:30:14 +0000 (0:00:00.303) 0:03:36.703 *********** 2025-07-04 18:35:27.464404 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:35:27.464423 | orchestrator | changed: [testbed-node-2] 2025-07-04 18:35:27.464442 | orchestrator | changed: [testbed-node-1] 2025-07-04 18:35:27.464459 | orchestrator | 2025-07-04 18:35:27.464480 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2025-07-04 18:35:27.464500 | orchestrator | Friday 04 July 2025 18:30:38 +0000 (0:00:23.658) 0:04:00.362 *********** 2025-07-04 18:35:27.464519 | orchestrator | changed: [testbed-node-0] 2025-07-04 18:35:27.464537 | orchestrator | changed: [testbed-node-2] 2025-07-04 18:35:27.464555 | orchestrator | changed: [testbed-node-1] 2025-07-04 18:35:27.464573 | orchestrator | 2025-07-04 18:35:27.464592 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2025-07-04 18:35:27.464611 | orchestrator | 2025-07-04 18:35:27.464629 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-07-04 18:35:27.464647 | orchestrator | Friday 04 July 2025 18:30:49 +0000 (0:00:10.889) 0:04:11.251 
*********** 2025-07-04 18:35:27.464666 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-07-04 18:35:27.464687 | orchestrator | 2025-07-04 18:35:27.464721 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-07-04 18:35:27.464741 | orchestrator | Friday 04 July 2025 18:30:50 +0000 (0:00:01.254) 0:04:12.506 *********** 2025-07-04 18:35:27.464759 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:35:27.464778 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:35:27.464795 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:35:27.464815 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:35:27.464834 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:35:27.464850 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:35:27.464870 | orchestrator | 2025-07-04 18:35:27.464889 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2025-07-04 18:35:27.464909 | orchestrator | Friday 04 July 2025 18:30:51 +0000 (0:00:00.771) 0:04:13.277 *********** 2025-07-04 18:35:27.464929 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:35:27.464947 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:35:27.464965 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:35:27.464985 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-04 18:35:27.465004 | orchestrator | 2025-07-04 18:35:27.465023 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-07-04 18:35:27.465042 | orchestrator | Friday 04 July 2025 18:30:52 +0000 (0:00:01.039) 0:04:14.316 *********** 2025-07-04 18:35:27.465061 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2025-07-04 18:35:27.465076 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 
2025-07-04 18:35:27.465095 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2025-07-04 18:35:27.465114 | orchestrator | 2025-07-04 18:35:27.465130 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-07-04 18:35:27.465197 | orchestrator | Friday 04 July 2025 18:30:53 +0000 (0:00:00.813) 0:04:15.129 *********** 2025-07-04 18:35:27.465219 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2025-07-04 18:35:27.465237 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2025-07-04 18:35:27.465256 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2025-07-04 18:35:27.465274 | orchestrator | 2025-07-04 18:35:27.465291 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-07-04 18:35:27.465311 | orchestrator | Friday 04 July 2025 18:30:54 +0000 (0:00:01.185) 0:04:16.315 *********** 2025-07-04 18:35:27.465329 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2025-07-04 18:35:27.465348 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:35:27.465368 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2025-07-04 18:35:27.465388 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:35:27.465407 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2025-07-04 18:35:27.465425 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:35:27.465443 | orchestrator | 2025-07-04 18:35:27.465462 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2025-07-04 18:35:27.465481 | orchestrator | Friday 04 July 2025 18:30:55 +0000 (0:00:00.808) 0:04:17.124 *********** 2025-07-04 18:35:27.465500 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-07-04 18:35:27.465518 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-07-04 18:35:27.465536 | orchestrator 
| skipping: [testbed-node-0] 2025-07-04 18:35:27.465553 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-07-04 18:35:27.465570 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-07-04 18:35:27.465588 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2025-07-04 18:35:27.465607 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:35:27.465626 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-07-04 18:35:27.465645 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-07-04 18:35:27.465665 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:35:27.465684 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2025-07-04 18:35:27.465702 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-07-04 18:35:27.465721 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-07-04 18:35:27.465752 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2025-07-04 18:35:27.465773 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-07-04 18:35:27.465793 | orchestrator | 2025-07-04 18:35:27.465811 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2025-07-04 18:35:27.465830 | orchestrator | Friday 04 July 2025 18:30:57 +0000 (0:00:02.051) 0:04:19.175 *********** 2025-07-04 18:35:27.465848 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:35:27.465866 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:35:27.465885 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:35:27.465904 | orchestrator | changed: [testbed-node-3] 2025-07-04 18:35:27.465923 | orchestrator | changed: [testbed-node-4] 2025-07-04 18:35:27.465942 
| orchestrator | changed: [testbed-node-5] 2025-07-04 18:35:27.465957 | orchestrator | 2025-07-04 18:35:27.465969 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2025-07-04 18:35:27.465980 | orchestrator | Friday 04 July 2025 18:30:58 +0000 (0:00:01.432) 0:04:20.608 *********** 2025-07-04 18:35:27.465991 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:35:27.466002 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:35:27.466013 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:35:27.466088 | orchestrator | changed: [testbed-node-3] 2025-07-04 18:35:27.466113 | orchestrator | changed: [testbed-node-4] 2025-07-04 18:35:27.466125 | orchestrator | changed: [testbed-node-5] 2025-07-04 18:35:27.466136 | orchestrator | 2025-07-04 18:35:27.466167 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-07-04 18:35:27.466180 | orchestrator | Friday 04 July 2025 18:31:00 +0000 (0:00:01.683) 0:04:22.292 *********** 2025-07-04 18:35:27.466212 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-07-04 18:35:27.466227 | 
orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-07-04 18:35:27.466239 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-07-04 18:35:27.466259 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-07-04 18:35:27.466272 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-07-04 18:35:27.466302 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-04 18:35:27.466316 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 
'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-04 18:35:27.466328 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-04 18:35:27.466340 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-07-04 18:35:27.466352 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 
'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-04 18:35:27.466371 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-04 18:35:27.466399 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-04 
18:35:27.466411 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-04 18:35:27.466423 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-04 18:35:27.466434 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-04 18:35:27.466446 | orchestrator | 2025-07-04 18:35:27.466457 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-07-04 18:35:27.466468 | orchestrator | Friday 04 July 2025 18:31:03 +0000 (0:00:02.722) 0:04:25.015 *********** 2025-07-04 18:35:27.466480 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-07-04 18:35:27.466492 | orchestrator | 2025-07-04 18:35:27.466503 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-07-04 18:35:27.466514 | orchestrator | Friday 04 July 2025 18:31:04 +0000 (0:00:01.232) 0:04:26.247 *********** 2025-07-04 18:35:27.466531 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-07-04 18:35:27.466571 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-07-04 18:35:27.466584 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-07-04 18:35:27.466595 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-04 18:35:27.466607 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-04 18:35:27.466624 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-04 18:35:27.466646 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-07-04 18:35:27.466665 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-07-04 18:35:27.466677 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-07-04 18:35:27.466689 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-04 18:35:27.466700 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-04 18:35:27.466711 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-04 18:35:27.466727 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 
'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-04 18:35:27.466752 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-04 18:35:27.466765 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 
2025-07-04 18:35:27.466777 | orchestrator | 2025-07-04 18:35:27.466788 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-07-04 18:35:27.466799 | orchestrator | Friday 04 July 2025 18:31:08 +0000 (0:00:03.678) 0:04:29.925 *********** 2025-07-04 18:35:27.466811 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-07-04 18:35:27.466823 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-07-04 18:35:27.466851 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 
'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-07-04 18:35:27.466863 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:35:27.466883 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-07-04 18:35:27.466895 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-07-04 18:35:27.466907 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-07-04 18:35:27.466918 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:35:27.466930 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 
'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-07-04 18:35:27.466953 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-07-04 18:35:27.466965 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-07-04 18:35:27.466977 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:35:27.466996 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-07-04 18:35:27.467007 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-07-04 18:35:27.467019 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:35:27.467030 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-07-04 18:35:27.467042 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-07-04 18:35:27.467060 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:35:27.467076 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-07-04 18:35:27.467087 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-07-04 18:35:27.467099 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:35:27.467110 | orchestrator | 2025-07-04 18:35:27.467121 | orchestrator | TASK [service-cert-copy : nova | 
Copying over backend internal TLS key] ******** 2025-07-04 18:35:27.467133 | orchestrator | Friday 04 July 2025 18:31:09 +0000 (0:00:01.689) 0:04:31.615 *********** 2025-07-04 18:35:27.467172 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-07-04 18:35:27.467185 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-07-04 18:35:27.467197 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 
'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-07-04 18:35:27.467216 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-07-04 18:35:27.467232 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-07-04 18:35:27.467244 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:35:27.467262 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-07-04 18:35:27.467273 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:35:27.467285 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh 
version --daemon'], 'timeout': '30'}}})  2025-07-04 18:35:27.467296 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-07-04 18:35:27.467314 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-07-04 18:35:27.467326 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:35:27.467342 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-07-04 18:35:27.467355 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-07-04 18:35:27.467366 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:35:27.467384 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-07-04 18:35:27.467396 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-07-04 18:35:27.467407 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:35:27.467419 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-07-04 18:35:27.467437 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-07-04 18:35:27.467448 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:35:27.467459 | orchestrator | 2025-07-04 18:35:27.467470 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-07-04 18:35:27.467482 | orchestrator | Friday 04 July 2025 18:31:12 +0000 (0:00:02.141) 0:04:33.756 *********** 
2025-07-04 18:35:27.467493 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:35:27.467504 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:35:27.467516 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:35:27.467527 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-04 18:35:27.467537 | orchestrator | 2025-07-04 18:35:27.467548 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2025-07-04 18:35:27.467559 | orchestrator | Friday 04 July 2025 18:31:12 +0000 (0:00:00.902) 0:04:34.658 *********** 2025-07-04 18:35:27.467570 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-07-04 18:35:27.467581 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-07-04 18:35:27.467597 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-07-04 18:35:27.467608 | orchestrator | 2025-07-04 18:35:27.467619 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2025-07-04 18:35:27.467630 | orchestrator | Friday 04 July 2025 18:31:13 +0000 (0:00:00.941) 0:04:35.599 *********** 2025-07-04 18:35:27.467641 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-07-04 18:35:27.467651 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-07-04 18:35:27.467662 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-07-04 18:35:27.467673 | orchestrator | 2025-07-04 18:35:27.467683 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2025-07-04 18:35:27.467694 | orchestrator | Friday 04 July 2025 18:31:14 +0000 (0:00:00.829) 0:04:36.429 *********** 2025-07-04 18:35:27.467705 | orchestrator | ok: [testbed-node-3] 2025-07-04 18:35:27.467717 | orchestrator | ok: [testbed-node-4] 2025-07-04 18:35:27.467727 | orchestrator | ok: [testbed-node-5] 2025-07-04 18:35:27.467739 | orchestrator | 2025-07-04 18:35:27.467750 | orchestrator | TASK [nova-cell : 
Extract cinder key from file] ******************************** 2025-07-04 18:35:27.467761 | orchestrator | Friday 04 July 2025 18:31:15 +0000 (0:00:00.515) 0:04:36.944 *********** 2025-07-04 18:35:27.467772 | orchestrator | ok: [testbed-node-3] 2025-07-04 18:35:27.467782 | orchestrator | ok: [testbed-node-4] 2025-07-04 18:35:27.467793 | orchestrator | ok: [testbed-node-5] 2025-07-04 18:35:27.467804 | orchestrator | 2025-07-04 18:35:27.467815 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2025-07-04 18:35:27.467826 | orchestrator | Friday 04 July 2025 18:31:15 +0000 (0:00:00.478) 0:04:37.422 *********** 2025-07-04 18:35:27.467837 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-07-04 18:35:27.467854 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-07-04 18:35:27.467865 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-07-04 18:35:27.467884 | orchestrator | 2025-07-04 18:35:27.467895 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2025-07-04 18:35:27.467906 | orchestrator | Friday 04 July 2025 18:31:16 +0000 (0:00:01.266) 0:04:38.688 *********** 2025-07-04 18:35:27.467917 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-07-04 18:35:27.467927 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-07-04 18:35:27.467938 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-07-04 18:35:27.467949 | orchestrator | 2025-07-04 18:35:27.467960 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2025-07-04 18:35:27.467971 | orchestrator | Friday 04 July 2025 18:31:18 +0000 (0:00:01.166) 0:04:39.855 *********** 2025-07-04 18:35:27.467981 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-07-04 18:35:27.467992 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-07-04 
18:35:27.468003 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-07-04 18:35:27.468014 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2025-07-04 18:35:27.468025 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2025-07-04 18:35:27.468035 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2025-07-04 18:35:27.468046 | orchestrator | 2025-07-04 18:35:27.468057 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2025-07-04 18:35:27.468068 | orchestrator | Friday 04 July 2025 18:31:22 +0000 (0:00:03.891) 0:04:43.746 *********** 2025-07-04 18:35:27.468079 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:35:27.468090 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:35:27.468100 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:35:27.468111 | orchestrator | 2025-07-04 18:35:27.468122 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2025-07-04 18:35:27.468133 | orchestrator | Friday 04 July 2025 18:31:22 +0000 (0:00:00.304) 0:04:44.051 *********** 2025-07-04 18:35:27.468207 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:35:27.468220 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:35:27.468231 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:35:27.468242 | orchestrator | 2025-07-04 18:35:27.468254 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2025-07-04 18:35:27.468265 | orchestrator | Friday 04 July 2025 18:31:22 +0000 (0:00:00.317) 0:04:44.368 *********** 2025-07-04 18:35:27.468275 | orchestrator | changed: [testbed-node-3] 2025-07-04 18:35:27.468286 | orchestrator | changed: [testbed-node-4] 2025-07-04 18:35:27.468297 | orchestrator | changed: [testbed-node-5] 2025-07-04 18:35:27.468308 | orchestrator | 2025-07-04 18:35:27.468319 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] 
************************* 2025-07-04 18:35:27.468330 | orchestrator | Friday 04 July 2025 18:31:24 +0000 (0:00:01.585) 0:04:45.953 *********** 2025-07-04 18:35:27.468342 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-07-04 18:35:27.468353 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-07-04 18:35:27.468364 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-07-04 18:35:27.468376 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-07-04 18:35:27.468387 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-07-04 18:35:27.468398 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-07-04 18:35:27.468408 | orchestrator | 2025-07-04 18:35:27.468419 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2025-07-04 18:35:27.468430 | orchestrator | Friday 04 July 2025 18:31:27 +0000 (0:00:03.116) 0:04:49.069 *********** 2025-07-04 18:35:27.468452 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-07-04 18:35:27.468464 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-07-04 18:35:27.468474 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-07-04 18:35:27.468485 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-07-04 18:35:27.468496 | orchestrator | changed: [testbed-node-3] 2025-07-04 18:35:27.468507 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-07-04 
18:35:27.468518 | orchestrator | changed: [testbed-node-4] 2025-07-04 18:35:27.468529 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-07-04 18:35:27.468539 | orchestrator | changed: [testbed-node-5] 2025-07-04 18:35:27.468550 | orchestrator | 2025-07-04 18:35:27.468561 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2025-07-04 18:35:27.468572 | orchestrator | Friday 04 July 2025 18:31:30 +0000 (0:00:03.065) 0:04:52.135 *********** 2025-07-04 18:35:27.468582 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:35:27.468593 | orchestrator | 2025-07-04 18:35:27.468603 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2025-07-04 18:35:27.468614 | orchestrator | Friday 04 July 2025 18:31:30 +0000 (0:00:00.118) 0:04:52.253 *********** 2025-07-04 18:35:27.468625 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:35:27.468636 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:35:27.468647 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:35:27.468658 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:35:27.468668 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:35:27.468679 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:35:27.468690 | orchestrator | 2025-07-04 18:35:27.468701 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2025-07-04 18:35:27.468719 | orchestrator | Friday 04 July 2025 18:31:31 +0000 (0:00:00.792) 0:04:53.046 *********** 2025-07-04 18:35:27.468730 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-07-04 18:35:27.468741 | orchestrator | 2025-07-04 18:35:27.468752 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2025-07-04 18:35:27.468764 | orchestrator | Friday 04 July 2025 18:31:32 +0000 (0:00:00.705) 0:04:53.752 *********** 2025-07-04 18:35:27.468774 | orchestrator | skipping: 
[testbed-node-3] 2025-07-04 18:35:27.468785 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:35:27.468796 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:35:27.468807 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:35:27.468817 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:35:27.468828 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:35:27.468838 | orchestrator | 2025-07-04 18:35:27.468849 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2025-07-04 18:35:27.468860 | orchestrator | Friday 04 July 2025 18:31:32 +0000 (0:00:00.558) 0:04:54.310 *********** 2025-07-04 18:35:27.468872 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-07-04 18:35:27.468883 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-07-04 18:35:27.468912 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-07-04 18:35:27.468924 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-04 18:35:27.468943 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-04 18:35:27.468955 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-04 18:35:27.468967 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 
8022'], 'timeout': '30'}}}) 2025-07-04 18:35:27.468985 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-07-04 18:35:27.468997 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-07-04 18:35:27.469013 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-04 18:35:27.469025 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 
'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-04 18:35:27.469043 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-04 18:35:27.469055 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 
2025-07-04 18:35:27.469066 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-04 18:35:27.469086 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-04 18:35:27.469098 | orchestrator | 2025-07-04 18:35:27.469113 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2025-07-04 18:35:27.469125 | orchestrator | Friday 04 July 2025 18:31:36 +0000 (0:00:03.870) 0:04:58.180 *********** 2025-07-04 
18:35:27.469136 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-07-04 18:35:27.469182 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-07-04 18:35:27.469195 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-07-04 18:35:27.469214 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-07-04 18:35:27.469225 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': 
'30'}}})  2025-07-04 18:35:27.469243 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-07-04 18:35:27.469260 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-04 18:35:27.469272 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-04 18:35:27.469284 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-04 18:35:27.469301 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-04 18:35:27.469313 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-04 18:35:27.469329 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-04 18:35:27.469348 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-04 18:35:27.469360 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-04 18:35:27.469371 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-04 18:35:27.469389 | orchestrator | 2025-07-04 18:35:27.469400 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2025-07-04 18:35:27.469411 | orchestrator | Friday 04 July 2025 18:31:42 +0000 (0:00:06.005) 0:05:04.186 *********** 2025-07-04 18:35:27.469422 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:35:27.469433 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:35:27.469444 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:35:27.469455 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:35:27.469465 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:35:27.469476 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:35:27.469487 | orchestrator | 2025-07-04 18:35:27.469497 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2025-07-04 18:35:27.469508 | orchestrator | Friday 04 July 2025 
18:31:43 +0000 (0:00:01.360) 0:05:05.546 *********** 2025-07-04 18:35:27.469519 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-07-04 18:35:27.469530 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-07-04 18:35:27.469540 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-07-04 18:35:27.469551 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-07-04 18:35:27.469561 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-07-04 18:35:27.469572 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-07-04 18:35:27.469583 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:35:27.469593 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-07-04 18:35:27.469604 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-07-04 18:35:27.469615 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:35:27.469626 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-07-04 18:35:27.469637 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:35:27.469652 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-07-04 18:35:27.469663 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-07-04 18:35:27.469674 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-07-04 18:35:27.469685 | orchestrator | 2025-07-04 18:35:27.469696 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 
2025-07-04 18:35:27.469707 | orchestrator | Friday 04 July 2025 18:31:47 +0000 (0:00:03.709) 0:05:09.255 *********** 2025-07-04 18:35:27.469717 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:35:27.469728 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:35:27.469739 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:35:27.469749 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:35:27.469760 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:35:27.469771 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:35:27.469782 | orchestrator | 2025-07-04 18:35:27.469792 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2025-07-04 18:35:27.469803 | orchestrator | Friday 04 July 2025 18:31:48 +0000 (0:00:00.833) 0:05:10.089 *********** 2025-07-04 18:35:27.469814 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-07-04 18:35:27.469831 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-07-04 18:35:27.469848 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-07-04 18:35:27.469859 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-07-04 18:35:27.469870 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-07-04 18:35:27.469881 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-07-04 18:35:27.469892 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-07-04 18:35:27.469902 | orchestrator | skipping: [testbed-node-2] => (item={'src': 
'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-07-04 18:35:27.469913 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-07-04 18:35:27.469924 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-07-04 18:35:27.469935 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:35:27.469946 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-07-04 18:35:27.469956 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:35:27.469967 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-07-04 18:35:27.469978 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:35:27.469989 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-07-04 18:35:27.470000 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-07-04 18:35:27.470011 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-07-04 18:35:27.470070 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-07-04 18:35:27.470082 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-07-04 18:35:27.470094 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-07-04 18:35:27.470105 | orchestrator | 2025-07-04 18:35:27.470116 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2025-07-04 
18:35:27.470127 | orchestrator | Friday 04 July 2025 18:31:53 +0000 (0:00:05.588) 0:05:15.678 *********** 2025-07-04 18:35:27.470138 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-07-04 18:35:27.470169 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-07-04 18:35:27.470180 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-07-04 18:35:27.470191 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-07-04 18:35:27.470202 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-07-04 18:35:27.470213 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-07-04 18:35:27.470223 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-07-04 18:35:27.470235 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-07-04 18:35:27.470246 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-07-04 18:35:27.470270 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-07-04 18:35:27.470281 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-07-04 18:35:27.470292 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-07-04 18:35:27.470303 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:35:27.470314 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-07-04 18:35:27.470324 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-07-04 18:35:27.470335 | orchestrator | skipping: 
[testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-07-04 18:35:27.470346 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:35:27.470357 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-07-04 18:35:27.470367 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:35:27.470378 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-07-04 18:35:27.470389 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-07-04 18:35:27.470400 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-07-04 18:35:27.470411 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-07-04 18:35:27.470429 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-07-04 18:35:27.470440 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-07-04 18:35:27.470450 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-07-04 18:35:27.470461 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-07-04 18:35:27.470472 | orchestrator | 2025-07-04 18:35:27.470483 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2025-07-04 18:35:27.470494 | orchestrator | Friday 04 July 2025 18:32:01 +0000 (0:00:07.661) 0:05:23.339 *********** 2025-07-04 18:35:27.470505 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:35:27.470516 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:35:27.470527 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:35:27.470538 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:35:27.470548 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:35:27.470559 
| orchestrator | skipping: [testbed-node-2] 2025-07-04 18:35:27.470570 | orchestrator | 2025-07-04 18:35:27.470581 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2025-07-04 18:35:27.470591 | orchestrator | Friday 04 July 2025 18:32:02 +0000 (0:00:00.567) 0:05:23.906 *********** 2025-07-04 18:35:27.470602 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:35:27.470613 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:35:27.470624 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:35:27.470635 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:35:27.470645 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:35:27.470656 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:35:27.470667 | orchestrator | 2025-07-04 18:35:27.470677 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2025-07-04 18:35:27.470689 | orchestrator | Friday 04 July 2025 18:32:03 +0000 (0:00:00.970) 0:05:24.877 *********** 2025-07-04 18:35:27.470699 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:35:27.470710 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:35:27.470721 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:35:27.470732 | orchestrator | changed: [testbed-node-3] 2025-07-04 18:35:27.470742 | orchestrator | changed: [testbed-node-4] 2025-07-04 18:35:27.470753 | orchestrator | changed: [testbed-node-5] 2025-07-04 18:35:27.470764 | orchestrator | 2025-07-04 18:35:27.470782 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2025-07-04 18:35:27.470793 | orchestrator | Friday 04 July 2025 18:32:04 +0000 (0:00:01.772) 0:05:26.650 *********** 2025-07-04 18:35:27.470805 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-07-04 18:35:27.470824 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-07-04 18:35:27.470836 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-07-04 18:35:27.470847 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:35:27.470867 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-07-04 18:35:27.470879 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-07-04 18:35:27.470898 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-07-04 18:35:27.470910 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-07-04 18:35:27.470926 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-07-04 18:35:27.470937 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:35:27.470958 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-07-04 18:35:27.470969 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:35:27.470981 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-07-04 18:35:27.470999 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 
'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-07-04 18:35:27.471010 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:35:27.471022 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-07-04 18:35:27.471033 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-07-04 18:35:27.471044 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:35:27.471060 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-07-04 18:35:27.471077 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-07-04 18:35:27.471089 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:35:27.471100 | orchestrator | 2025-07-04 18:35:27.471111 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2025-07-04 18:35:27.471122 | orchestrator | Friday 04 July 2025 18:32:06 +0000 (0:00:01.382) 0:05:28.032 *********** 2025-07-04 18:35:27.471133 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-07-04 18:35:27.471162 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-07-04 18:35:27.471174 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:35:27.471185 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-07-04 18:35:27.471195 | 
orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-07-04 18:35:27.471214 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:35:27.471225 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-07-04 18:35:27.471235 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-07-04 18:35:27.471246 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:35:27.471257 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-07-04 18:35:27.471268 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-07-04 18:35:27.471279 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:35:27.471290 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-07-04 18:35:27.471300 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-07-04 18:35:27.471311 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:35:27.471322 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-07-04 18:35:27.471333 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-07-04 18:35:27.471343 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:35:27.471354 | orchestrator | 2025-07-04 18:35:27.471365 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2025-07-04 18:35:27.471376 | orchestrator | Friday 04 July 2025 18:32:06 +0000 (0:00:00.577) 0:05:28.610 *********** 2025-07-04 18:35:27.471387 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-07-04 18:35:27.471404 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-07-04 18:35:27.471422 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-07-04 18:35:27.471441 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-07-04 18:35:27.471453 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-04 18:35:27.471465 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 
'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-07-04 18:35:27.471476 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-04 18:35:27.471492 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-04 18:35:27.471504 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-07-04 18:35:27.471522 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-04 18:35:27.471540 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-04 18:35:27.471552 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-04 18:35:27.471563 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-04 18:35:27.471579 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-04 18:35:27.471591 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-07-04 18:35:27.471608 | orchestrator |
2025-07-04 18:35:27.471620 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-07-04 18:35:27.471630 | orchestrator | Friday 04 July 2025 18:32:09 +0000 (0:00:02.698) 0:05:31.309 ***********
2025-07-04 18:35:27.471641 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:35:27.471653 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:35:27.471664 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:35:27.471680 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:35:27.471692 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:35:27.471702 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:35:27.471713 | orchestrator |
2025-07-04 18:35:27.471724 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-07-04 18:35:27.471735 | orchestrator | Friday 04 July 2025 18:32:10 +0000 (0:00:00.610) 0:05:31.919 ***********
2025-07-04 18:35:27.471746 | orchestrator |
2025-07-04 18:35:27.471757 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-07-04 18:35:27.471768 | orchestrator | Friday 04 July 2025 18:32:10 +0000 (0:00:00.276) 0:05:32.196 ***********
2025-07-04 18:35:27.471779 | orchestrator |
2025-07-04 18:35:27.471789 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-07-04 18:35:27.471800 | orchestrator | Friday 04 July 2025 18:32:10 +0000 (0:00:00.136) 0:05:32.332 ***********
2025-07-04 18:35:27.471810 | orchestrator |
2025-07-04 18:35:27.471821 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-07-04 18:35:27.471832 | orchestrator | Friday 04 July 2025 18:32:10 +0000 (0:00:00.128) 0:05:32.461 ***********
2025-07-04 18:35:27.471843 | orchestrator |
2025-07-04 18:35:27.471858 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-07-04 18:35:27.471877 | orchestrator | Friday 04 July 2025 18:32:10 +0000 (0:00:00.128) 0:05:32.590 ***********
2025-07-04 18:35:27.471905 | orchestrator |
2025-07-04 18:35:27.471925 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-07-04 18:35:27.471942 | orchestrator | Friday 04 July 2025 18:32:10 +0000 (0:00:00.127) 0:05:32.717 ***********
2025-07-04 18:35:27.471960 | orchestrator |
2025-07-04 18:35:27.471974 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] *****************
2025-07-04 18:35:27.471990 | orchestrator | Friday 04 July 2025 18:32:11 +0000 (0:00:00.119) 0:05:32.836 ***********
2025-07-04 18:35:27.472007 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:35:27.472024 | orchestrator | changed: [testbed-node-2]
2025-07-04 18:35:27.472042 | orchestrator | changed: [testbed-node-1]
2025-07-04 18:35:27.472060 | orchestrator |
2025-07-04 18:35:27.472078 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] ****************
2025-07-04 18:35:27.472096 | orchestrator | Friday 04 July 2025 18:32:23 +0000 (0:00:12.047) 0:05:44.884 ***********
2025-07-04 18:35:27.472114 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:35:27.472128 | orchestrator | changed: [testbed-node-1]
2025-07-04 18:35:27.472139 | orchestrator | changed: [testbed-node-2]
2025-07-04 18:35:27.472178 | orchestrator |
2025-07-04 18:35:27.472189 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] ***********************
2025-07-04 18:35:27.472200 | orchestrator | Friday 04 July 2025 18:32:41 +0000 (0:00:17.895) 0:06:02.780 ***********
2025-07-04 18:35:27.472210 | orchestrator | changed: [testbed-node-5]
2025-07-04 18:35:27.472221 | orchestrator | changed: [testbed-node-3]
2025-07-04 18:35:27.472232 | orchestrator | changed: [testbed-node-4]
2025-07-04 18:35:27.472243 | orchestrator |
2025-07-04 18:35:27.472254 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] *******************
2025-07-04 18:35:27.472264 | orchestrator | Friday 04 July 2025 18:33:05 +0000 (0:00:24.857) 0:06:27.637 ***********
2025-07-04 18:35:27.472275 | orchestrator | changed: [testbed-node-4]
2025-07-04 18:35:27.472286 | orchestrator | changed: [testbed-node-5]
2025-07-04 18:35:27.472297 | orchestrator | changed: [testbed-node-3]
2025-07-04 18:35:27.472308 | orchestrator |
2025-07-04 18:35:27.472318 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] **************
2025-07-04 18:35:27.472340 | orchestrator | Friday 04 July 2025 18:33:38 +0000 (0:00:32.357) 0:06:59.995 ***********
2025-07-04 18:35:27.472351 | orchestrator | FAILED - RETRYING: [testbed-node-3]: Checking libvirt container is ready (10 retries left).
2025-07-04 18:35:27.472362 | orchestrator | FAILED - RETRYING: [testbed-node-4]: Checking libvirt container is ready (10 retries left).
2025-07-04 18:35:27.472373 | orchestrator | FAILED - RETRYING: [testbed-node-5]: Checking libvirt container is ready (10 retries left).
2025-07-04 18:35:27.472384 | orchestrator | changed: [testbed-node-3]
2025-07-04 18:35:27.472395 | orchestrator | changed: [testbed-node-4]
2025-07-04 18:35:27.472406 | orchestrator | changed: [testbed-node-5]
2025-07-04 18:35:27.472416 | orchestrator |
2025-07-04 18:35:27.472427 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] *************************
2025-07-04 18:35:27.472438 | orchestrator | Friday 04 July 2025 18:33:44 +0000 (0:00:06.444) 0:07:06.440 ***********
2025-07-04 18:35:27.472449 | orchestrator | changed: [testbed-node-3]
2025-07-04 18:35:27.472460 | orchestrator | changed: [testbed-node-4]
2025-07-04 18:35:27.472477 | orchestrator | changed: [testbed-node-5]
2025-07-04 18:35:27.472488 | orchestrator |
2025-07-04 18:35:27.472499 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] *******************
2025-07-04 18:35:27.472509 | orchestrator | Friday 04 July 2025 18:33:45 +0000 (0:00:00.825) 0:07:07.266 ***********
2025-07-04 18:35:27.472520 | orchestrator | changed: [testbed-node-5]
2025-07-04 18:35:27.472531 | orchestrator | changed: [testbed-node-4]
2025-07-04 18:35:27.472541 | orchestrator | changed: [testbed-node-3]
2025-07-04 18:35:27.472552 | orchestrator |
2025-07-04 18:35:27.472562 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] ***
2025-07-04 18:35:27.472573 | orchestrator | Friday 04 July 2025 18:34:15 +0000 (0:00:30.320) 0:07:37.586 ***********
2025-07-04 18:35:27.472584 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:35:27.472595 | orchestrator |
2025-07-04 18:35:27.472605 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] ****
2025-07-04 18:35:27.472616 | orchestrator | Friday 04 July 2025 18:34:16 +0000 (0:00:00.174) 0:07:37.760 ***********
2025-07-04 18:35:27.472627 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:35:27.472638 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:35:27.472648 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:35:27.472659 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:35:27.472670 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:35:27.472680 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left).
2025-07-04 18:35:27.472692 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2025-07-04 18:35:27.472703 | orchestrator |
2025-07-04 18:35:27.472723 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] *************
2025-07-04 18:35:27.472735 | orchestrator | Friday 04 July 2025 18:34:38 +0000 (0:00:22.660) 0:08:00.421 ***********
2025-07-04 18:35:27.472746 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:35:27.472756 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:35:27.472767 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:35:27.472778 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:35:27.472789 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:35:27.472799 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:35:27.472810 | orchestrator |
2025-07-04 18:35:27.472821 | orchestrator | TASK [nova-cell : Include discover_computes.yml] *******************************
2025-07-04 18:35:27.472832 | orchestrator | Friday 04 July 2025 18:34:47 +0000 (0:00:09.296) 0:08:09.718 ***********
2025-07-04 18:35:27.472843 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:35:27.472853 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:35:27.472864 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:35:27.472875 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:35:27.472885 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:35:27.472896 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-5
2025-07-04 18:35:27.472914 | orchestrator |
2025-07-04 18:35:27.472924 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2025-07-04 18:35:27.472935 | orchestrator | Friday 04 July 2025 18:34:52 +0000 (0:00:04.166) 0:08:13.884 ***********
2025-07-04 18:35:27.472947 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2025-07-04 18:35:27.472958 | orchestrator |
2025-07-04 18:35:27.472968 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2025-07-04 18:35:27.472979 | orchestrator | Friday 04 July 2025 18:35:05 +0000 (0:00:13.354) 0:08:27.238 ***********
2025-07-04 18:35:27.472989 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2025-07-04 18:35:27.473000 | orchestrator |
2025-07-04 18:35:27.473011 | orchestrator | TASK [nova-cell : Fail if cell settings not found] *****************************
2025-07-04 18:35:27.473022 | orchestrator | Friday 04 July 2025 18:35:06 +0000 (0:00:01.364) 0:08:28.603 ***********
2025-07-04 18:35:27.473032 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:35:27.473043 | orchestrator |
2025-07-04 18:35:27.473054 | orchestrator | TASK [nova-cell : Discover nova hosts] *****************************************
2025-07-04 18:35:27.473064 | orchestrator | Friday 04 July 2025 18:35:08 +0000 (0:00:01.312) 0:08:29.916 ***********
2025-07-04 18:35:27.473075 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2025-07-04 18:35:27.473086 | orchestrator |
2025-07-04 18:35:27.473096 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************
2025-07-04 18:35:27.473107 | orchestrator | Friday 04 July 2025 18:35:19 +0000 (0:00:11.659) 0:08:41.576 ***********
2025-07-04 18:35:27.473118 | orchestrator | ok: [testbed-node-3]
2025-07-04 18:35:27.473129 | orchestrator | ok: [testbed-node-5]
2025-07-04 18:35:27.473140 | orchestrator | ok: [testbed-node-4]
2025-07-04 18:35:27.473187 | orchestrator | ok: [testbed-node-1]
2025-07-04 18:35:27.473206 | orchestrator | ok: [testbed-node-0]
2025-07-04 18:35:27.473223 | orchestrator | ok: [testbed-node-2]
2025-07-04 18:35:27.473239 | orchestrator |
2025-07-04 18:35:27.473250 | orchestrator | PLAY [Refresh nova scheduler cell cache] ***************************************
2025-07-04 18:35:27.473261 | orchestrator |
2025-07-04 18:35:27.473271 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] *****************************
2025-07-04 18:35:27.473282 | orchestrator | Friday 04 July 2025 18:35:21 +0000 (0:00:01.871) 0:08:43.447 ***********
2025-07-04 18:35:27.473293 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:35:27.473304 | orchestrator | changed: [testbed-node-1]
2025-07-04 18:35:27.473315 | orchestrator | changed: [testbed-node-2]
2025-07-04 18:35:27.473326 | orchestrator |
2025-07-04 18:35:27.473337 | orchestrator | PLAY [Reload global Nova super conductor services] *****************************
2025-07-04 18:35:27.473348 | orchestrator |
2025-07-04 18:35:27.473359 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] ***
2025-07-04 18:35:27.473369 | orchestrator | Friday 04 July 2025 18:35:22 +0000 (0:00:01.132) 0:08:44.579 ***********
2025-07-04 18:35:27.473380 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:35:27.473390 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:35:27.473401 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:35:27.473412 | orchestrator |
2025-07-04 18:35:27.473422 | orchestrator | PLAY [Reload Nova cell services] ***********************************************
2025-07-04 18:35:27.473433 | orchestrator |
2025-07-04 18:35:27.473444 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] *********
2025-07-04 18:35:27.473469 | orchestrator | Friday 04 July 2025 18:35:23 +0000 (0:00:00.489) 0:08:45.068 ***********
2025-07-04 18:35:27.473488 | orchestrator |
skipping: [testbed-node-3] => (item=nova-conductor)
2025-07-04 18:35:27.473506 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2025-07-04 18:35:27.473524 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2025-07-04 18:35:27.473541 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)
2025-07-04 18:35:27.473558 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)
2025-07-04 18:35:27.473575 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)
2025-07-04 18:35:27.473605 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:35:27.473630 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)
2025-07-04 18:35:27.473649 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2025-07-04 18:35:27.473669 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2025-07-04 18:35:27.473680 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)
2025-07-04 18:35:27.473691 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)
2025-07-04 18:35:27.473702 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)
2025-07-04 18:35:27.473712 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)
2025-07-04 18:35:27.473723 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2025-07-04 18:35:27.473734 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2025-07-04 18:35:27.473753 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)
2025-07-04 18:35:27.473765 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)
2025-07-04 18:35:27.473775 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)
2025-07-04 18:35:27.473786 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:35:27.473796 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)
2025-07-04 18:35:27.473807 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2025-07-04 18:35:27.473818 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:35:27.473829 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2025-07-04 18:35:27.473842 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)
2025-07-04 18:35:27.473852 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)
2025-07-04 18:35:27.473863 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)
2025-07-04 18:35:27.473873 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)
2025-07-04 18:35:27.473884 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2025-07-04 18:35:27.473894 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2025-07-04 18:35:27.473905 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)
2025-07-04 18:35:27.473916 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)
2025-07-04 18:35:27.473926 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)
2025-07-04 18:35:27.473937 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:35:27.473947 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:35:27.473958 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)
2025-07-04 18:35:27.473969 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2025-07-04 18:35:27.473979 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2025-07-04 18:35:27.473990 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)
2025-07-04 18:35:27.474000 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)
2025-07-04 18:35:27.474011 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)
2025-07-04 18:35:27.474059 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:35:27.474070 | orchestrator
|
2025-07-04 18:35:27.474081 | orchestrator | PLAY [Reload global Nova API services] *****************************************
2025-07-04 18:35:27.474093 | orchestrator |
2025-07-04 18:35:27.474103 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] ***************
2025-07-04 18:35:27.474114 | orchestrator | Friday 04 July 2025 18:35:24 +0000 (0:00:01.326) 0:08:46.395 ***********
2025-07-04 18:35:27.474125 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)
2025-07-04 18:35:27.474135 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)
2025-07-04 18:35:27.474163 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:35:27.474174 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)
2025-07-04 18:35:27.474185 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)
2025-07-04 18:35:27.474204 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:35:27.474215 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)
2025-07-04 18:35:27.474226 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)
2025-07-04 18:35:27.474237 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:35:27.474247 | orchestrator |
2025-07-04 18:35:27.474258 | orchestrator | PLAY [Run Nova API online data migrations] *************************************
2025-07-04 18:35:27.474269 | orchestrator |
2025-07-04 18:35:27.474282 | orchestrator | TASK [nova : Run Nova API online database migrations] **************************
2025-07-04 18:35:27.474300 | orchestrator | Friday 04 July 2025 18:35:25 +0000 (0:00:00.758) 0:08:47.153 ***********
2025-07-04 18:35:27.474316 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:35:27.474327 | orchestrator |
2025-07-04 18:35:27.474338 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************
2025-07-04 18:35:27.474349 | orchestrator |
2025-07-04 18:35:27.474359 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ********************
2025-07-04 18:35:27.474370 | orchestrator | Friday 04 July 2025 18:35:26 +0000 (0:00:00.653) 0:08:47.806 ***********
2025-07-04 18:35:27.474381 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:35:27.474391 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:35:27.474402 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:35:27.474420 | orchestrator |
2025-07-04 18:35:27.474438 | orchestrator | PLAY RECAP *********************************************************************
2025-07-04 18:35:27.474465 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-04 18:35:27.474486 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0
2025-07-04 18:35:27.474506 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2025-07-04 18:35:27.474519 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2025-07-04 18:35:27.474530 | orchestrator | testbed-node-3 : ok=38  changed=27  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2025-07-04 18:35:27.474540 | orchestrator | testbed-node-4 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2025-07-04 18:35:27.474560 | orchestrator | testbed-node-5 : ok=42  changed=27  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0
2025-07-04 18:35:27.474571 | orchestrator |
2025-07-04 18:35:27.474582 | orchestrator |
2025-07-04 18:35:27.474593 | orchestrator | TASKS RECAP ********************************************************************
2025-07-04 18:35:27.474604 | orchestrator | Friday 04 July 2025 18:35:26 +0000 (0:00:00.409) 0:08:48.216 ***********
2025-07-04 18:35:27.474614 | orchestrator | ===============================================================================
2025-07-04 18:35:27.474625 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 32.36s
2025-07-04 18:35:27.474636 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 32.17s
2025-07-04 18:35:27.474647 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 30.32s
2025-07-04 18:35:27.474657 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 24.86s
2025-07-04 18:35:27.474668 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 23.66s
2025-07-04 18:35:27.474679 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 22.66s
2025-07-04 18:35:27.474689 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 21.23s
2025-07-04 18:35:27.474700 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 18.07s
2025-07-04 18:35:27.474719 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 17.90s
2025-07-04 18:35:27.474729 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 14.48s
2025-07-04 18:35:27.474740 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.35s
2025-07-04 18:35:27.474751 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.25s
2025-07-04 18:35:27.474761 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.16s
2025-07-04 18:35:27.474772 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 12.05s
2025-07-04 18:35:27.474782 | orchestrator | nova-cell : Create cell ------------------------------------------------ 11.80s
2025-07-04 18:35:27.474793 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 11.66s
2025-07-04 18:35:27.474804 | orchestrator | nova : Restart
nova-api container -------------------------------------- 10.89s 2025-07-04 18:35:27.474815 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 9.68s 2025-07-04 18:35:27.474825 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 9.30s 2025-07-04 18:35:27.474836 | orchestrator | nova-cell : Copying files for nova-ssh ---------------------------------- 7.66s 2025-07-04 18:35:30.499853 | orchestrator | 2025-07-04 18:35:30 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-07-04 18:35:33.544730 | orchestrator | 2025-07-04 18:35:33 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-07-04 18:35:36.592534 | orchestrator | 2025-07-04 18:35:36 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-07-04 18:35:39.634135 | orchestrator | 2025-07-04 18:35:39 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-07-04 18:35:42.678583 | orchestrator | 2025-07-04 18:35:42 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-07-04 18:35:45.725931 | orchestrator | 2025-07-04 18:35:45 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-07-04 18:35:48.771614 | orchestrator | 2025-07-04 18:35:48 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-07-04 18:35:51.813400 | orchestrator | 2025-07-04 18:35:51 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-07-04 18:35:54.852829 | orchestrator | 2025-07-04 18:35:54 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-07-04 18:35:57.895309 | orchestrator | 2025-07-04 18:35:57 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-07-04 18:36:00.937984 | orchestrator | 2025-07-04 18:36:00 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-07-04 18:36:03.978855 | orchestrator | 2025-07-04 18:36:03 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-07-04 18:36:07.028411 | orchestrator | 2025-07-04 18:36:07 | INFO  | Wait 1 second(s) until 
refresh of running tasks 2025-07-04 18:36:10.063871 | orchestrator | 2025-07-04 18:36:10 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-07-04 18:36:13.108428 | orchestrator | 2025-07-04 18:36:13 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-07-04 18:36:16.147455 | orchestrator | 2025-07-04 18:36:16 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-07-04 18:36:19.192873 | orchestrator | 2025-07-04 18:36:19 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-07-04 18:36:22.244240 | orchestrator | 2025-07-04 18:36:22 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-07-04 18:36:25.287965 | orchestrator | 2025-07-04 18:36:25 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-07-04 18:36:28.326947 | orchestrator | 2025-07-04 18:36:28.574941 | orchestrator | 2025-07-04 18:36:28.582224 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Fri Jul 4 18:36:28 UTC 2025 2025-07-04 18:36:28.582317 | orchestrator | 2025-07-04 18:36:29.066822 | orchestrator | ok: Runtime: 0:37:34.862482 2025-07-04 18:36:29.410183 | 2025-07-04 18:36:29.410347 | TASK [Bootstrap services] 2025-07-04 18:36:30.162945 | orchestrator | 2025-07-04 18:36:30.163138 | orchestrator | # BOOTSTRAP 2025-07-04 18:36:30.163160 | orchestrator | 2025-07-04 18:36:30.163174 | orchestrator | + set -e 2025-07-04 18:36:30.163187 | orchestrator | + echo 2025-07-04 18:36:30.163202 | orchestrator | + echo '# BOOTSTRAP' 2025-07-04 18:36:30.163220 | orchestrator | + echo 2025-07-04 18:36:30.163267 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2025-07-04 18:36:30.173253 | orchestrator | + set -e 2025-07-04 18:36:30.173326 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2025-07-04 18:36:33.433976 | orchestrator | 2025-07-04 18:36:33 | INFO  | It takes a moment until task 1e72da27-8a3f-4add-a58c-a1be80bd0a91 (flavor-manager) has been started and output is visible here. 
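The deploy step above ends with the orchestrator repeatedly logging "Wait 1 second(s) until refresh of running tasks" until the remaining tasks drain. A minimal sketch of such a poll-until-done loop follows; the names `poll_until` and `fake_check` are illustrative, not taken from the OSISM code, and the sleep is stubbed out so the demo finishes instantly:

```python
import time

def poll_until(check, interval=1.0, timeout=60.0, clock=time.monotonic, sleep=time.sleep):
    """Call check() every `interval` seconds until it returns truthy or the timeout expires."""
    deadline = clock() + timeout
    while clock() < deadline:
        result = check()
        if result:
            return result
        sleep(interval)
    raise TimeoutError("condition not met before timeout")

# Demo with a counter standing in for "are the running tasks finished?":
state = {"ticks": 0}
def fake_check():
    state["ticks"] += 1
    return state["ticks"] >= 3

print(poll_until(fake_check, interval=1, timeout=10, sleep=lambda s: None))  # → True
```

Injecting `clock` and `sleep` keeps the loop testable without real waiting, which is also why the demo can use a no-op sleep.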
2025-07-04 18:36:37.900154 | orchestrator | 2025-07-04 18:36:37 | INFO  | Flavor SCS-1V-4 created 2025-07-04 18:36:38.219887 | orchestrator | 2025-07-04 18:36:38 | INFO  | Flavor SCS-2V-8 created 2025-07-04 18:36:38.544536 | orchestrator | 2025-07-04 18:36:38 | INFO  | Flavor SCS-4V-16 created 2025-07-04 18:36:38.730653 | orchestrator | 2025-07-04 18:36:38 | INFO  | Flavor SCS-8V-32 created 2025-07-04 18:36:38.860733 | orchestrator | 2025-07-04 18:36:38 | INFO  | Flavor SCS-1V-2 created 2025-07-04 18:36:39.017095 | orchestrator | 2025-07-04 18:36:39 | INFO  | Flavor SCS-2V-4 created 2025-07-04 18:36:39.151166 | orchestrator | 2025-07-04 18:36:39 | INFO  | Flavor SCS-4V-8 created 2025-07-04 18:36:39.277127 | orchestrator | 2025-07-04 18:36:39 | INFO  | Flavor SCS-8V-16 created 2025-07-04 18:36:39.410197 | orchestrator | 2025-07-04 18:36:39 | INFO  | Flavor SCS-16V-32 created 2025-07-04 18:36:39.553673 | orchestrator | 2025-07-04 18:36:39 | INFO  | Flavor SCS-1V-8 created 2025-07-04 18:36:39.685435 | orchestrator | 2025-07-04 18:36:39 | INFO  | Flavor SCS-2V-16 created 2025-07-04 18:36:39.835816 | orchestrator | 2025-07-04 18:36:39 | INFO  | Flavor SCS-4V-32 created 2025-07-04 18:36:39.982100 | orchestrator | 2025-07-04 18:36:39 | INFO  | Flavor SCS-1L-1 created 2025-07-04 18:36:40.129410 | orchestrator | 2025-07-04 18:36:40 | INFO  | Flavor SCS-2V-4-20s created 2025-07-04 18:36:40.277689 | orchestrator | 2025-07-04 18:36:40 | INFO  | Flavor SCS-4V-16-100s created 2025-07-04 18:36:40.419966 | orchestrator | 2025-07-04 18:36:40 | INFO  | Flavor SCS-1V-4-10 created 2025-07-04 18:36:40.529676 | orchestrator | 2025-07-04 18:36:40 | INFO  | Flavor SCS-2V-8-20 created 2025-07-04 18:36:40.654902 | orchestrator | 2025-07-04 18:36:40 | INFO  | Flavor SCS-4V-16-50 created 2025-07-04 18:36:40.827724 | orchestrator | 2025-07-04 18:36:40 | INFO  | Flavor SCS-8V-32-100 created 2025-07-04 18:36:40.952643 | orchestrator | 2025-07-04 18:36:40 | INFO  | Flavor SCS-1V-2-5 created 
2025-07-04 18:36:41.089223 | orchestrator | 2025-07-04 18:36:41 | INFO  | Flavor SCS-2V-4-10 created 2025-07-04 18:36:41.210999 | orchestrator | 2025-07-04 18:36:41 | INFO  | Flavor SCS-4V-8-20 created 2025-07-04 18:36:41.330663 | orchestrator | 2025-07-04 18:36:41 | INFO  | Flavor SCS-8V-16-50 created 2025-07-04 18:36:41.491105 | orchestrator | 2025-07-04 18:36:41 | INFO  | Flavor SCS-16V-32-100 created 2025-07-04 18:36:41.632122 | orchestrator | 2025-07-04 18:36:41 | INFO  | Flavor SCS-1V-8-20 created 2025-07-04 18:36:41.741371 | orchestrator | 2025-07-04 18:36:41 | INFO  | Flavor SCS-2V-16-50 created 2025-07-04 18:36:41.890752 | orchestrator | 2025-07-04 18:36:41 | INFO  | Flavor SCS-4V-32-100 created 2025-07-04 18:36:42.039731 | orchestrator | 2025-07-04 18:36:42 | INFO  | Flavor SCS-1L-1-5 created 2025-07-04 18:36:44.286707 | orchestrator | 2025-07-04 18:36:44 | INFO  | Trying to run play bootstrap-basic in environment openstack 2025-07-04 18:36:44.291395 | orchestrator | Registering Redlock._acquired_script 2025-07-04 18:36:44.291461 | orchestrator | Registering Redlock._extend_script 2025-07-04 18:36:44.291494 | orchestrator | Registering Redlock._release_script 2025-07-04 18:36:44.352599 | orchestrator | 2025-07-04 18:36:44 | INFO  | Task 10e801ee-dd70-4c28-927c-c851e01d4252 (bootstrap-basic) was prepared for execution. 2025-07-04 18:36:44.352723 | orchestrator | 2025-07-04 18:36:44 | INFO  | It takes a moment until task 10e801ee-dd70-4c28-927c-c851e01d4252 (bootstrap-basic) has been started and output is visible here. 
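The flavor names created above follow the SCS flavor naming convention: `SCS-<vCPUs><class>-<RAM GiB>[-<disk GB>[s]]`, where `V` denotes a regular virtual CPU, `L` a low-performance one, and a trailing `s` an SSD-backed root disk. A small decoder sketch, assuming that reading of the convention (the function name is illustrative):

```python
import re

# Matches names like SCS-2V-8, SCS-1L-1-5, SCS-2V-4-20s
SCS_FLAVOR = re.compile(r"^SCS-(\d+)(V|L)-(\d+)(?:-(\d+)(s)?)?$")

def parse_scs_flavor(name):
    """Decode an SCS flavor name into its resource components."""
    m = SCS_FLAVOR.match(name)
    if not m:
        raise ValueError(f"not an SCS flavor name: {name}")
    cpus, cpu_class, ram, disk, ssd = m.groups()
    return {
        "vcpus": int(cpus),
        "cpu_class": "low-performance" if cpu_class == "L" else "virtual",
        "ram_gib": int(ram),
        "disk_gb": int(disk) if disk else None,  # None: no root disk size encoded
        "ssd": ssd == "s",
    }

print(parse_scs_flavor("SCS-2V-4-20s"))
```

Names without a disk component (e.g. `SCS-2V-8`) leave the root disk size to the image or volume, which is why `disk_gb` stays `None` for them.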
2025-07-04 18:36:48.855027 | orchestrator | 2025-07-04 18:36:48.855906 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2025-07-04 18:36:48.858664 | orchestrator | 2025-07-04 18:36:48.859252 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-07-04 18:36:48.861188 | orchestrator | Friday 04 July 2025 18:36:48 +0000 (0:00:00.083) 0:00:00.083 *********** 2025-07-04 18:36:50.735972 | orchestrator | ok: [localhost] 2025-07-04 18:36:50.736282 | orchestrator | 2025-07-04 18:36:50.737224 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2025-07-04 18:36:50.738200 | orchestrator | Friday 04 July 2025 18:36:50 +0000 (0:00:01.879) 0:00:01.963 *********** 2025-07-04 18:36:58.851741 | orchestrator | ok: [localhost] 2025-07-04 18:36:58.851882 | orchestrator | 2025-07-04 18:36:58.853906 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2025-07-04 18:36:58.855136 | orchestrator | Friday 04 July 2025 18:36:58 +0000 (0:00:08.114) 0:00:10.077 *********** 2025-07-04 18:37:06.379853 | orchestrator | changed: [localhost] 2025-07-04 18:37:06.381542 | orchestrator | 2025-07-04 18:37:06.383563 | orchestrator | TASK [Get volume type local] *************************************************** 2025-07-04 18:37:06.384484 | orchestrator | Friday 04 July 2025 18:37:06 +0000 (0:00:07.530) 0:00:17.608 *********** 2025-07-04 18:37:12.810134 | orchestrator | ok: [localhost] 2025-07-04 18:37:12.810763 | orchestrator | 2025-07-04 18:37:12.812407 | orchestrator | TASK [Create volume type local] ************************************************ 2025-07-04 18:37:12.814127 | orchestrator | Friday 04 July 2025 18:37:12 +0000 (0:00:06.430) 0:00:24.038 *********** 2025-07-04 18:37:19.990801 | orchestrator | changed: [localhost] 2025-07-04 18:37:19.991902 | orchestrator | 2025-07-04 18:37:19.992663 | orchestrator | 
TASK [Create public network] *************************************************** 2025-07-04 18:37:19.993579 | orchestrator | Friday 04 July 2025 18:37:19 +0000 (0:00:07.178) 0:00:31.217 *********** 2025-07-04 18:37:27.132080 | orchestrator | changed: [localhost] 2025-07-04 18:37:27.134799 | orchestrator | 2025-07-04 18:37:27.134906 | orchestrator | TASK [Set public network to default] ******************************************* 2025-07-04 18:37:27.136012 | orchestrator | Friday 04 July 2025 18:37:27 +0000 (0:00:07.142) 0:00:38.360 *********** 2025-07-04 18:37:34.231993 | orchestrator | changed: [localhost] 2025-07-04 18:37:34.232111 | orchestrator | 2025-07-04 18:37:34.233867 | orchestrator | TASK [Create public subnet] **************************************************** 2025-07-04 18:37:34.235757 | orchestrator | Friday 04 July 2025 18:37:34 +0000 (0:00:07.099) 0:00:45.459 *********** 2025-07-04 18:37:39.865263 | orchestrator | changed: [localhost] 2025-07-04 18:37:39.865976 | orchestrator | 2025-07-04 18:37:39.868451 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2025-07-04 18:37:39.870285 | orchestrator | Friday 04 July 2025 18:37:39 +0000 (0:00:05.634) 0:00:51.094 *********** 2025-07-04 18:37:43.781254 | orchestrator | changed: [localhost] 2025-07-04 18:37:43.781368 | orchestrator | 2025-07-04 18:37:43.781388 | orchestrator | TASK [Create manager role] ***************************************************** 2025-07-04 18:37:43.783387 | orchestrator | Friday 04 July 2025 18:37:43 +0000 (0:00:03.913) 0:00:55.007 *********** 2025-07-04 18:37:47.407660 | orchestrator | ok: [localhost] 2025-07-04 18:37:47.407895 | orchestrator | 2025-07-04 18:37:47.408253 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-04 18:37:47.409897 | orchestrator | 2025-07-04 18:37:47 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 
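The bootstrap-basic play above pairs each "Get volume type X" task with a "Create volume type X" task: a check-then-create pattern that makes the play idempotent on reruns. A toy sketch of that pattern, with an in-memory dict standing in for the OpenStack API lookup (all names here are illustrative):

```python
def ensure_volume_type(existing, name, **properties):
    """Create the volume type only if absent, mirroring the play's get-then-create pairing.

    `existing` stands in for the cloud's current volume types (a dict keyed by
    name); in the real play this lookup is an OpenStack API call.
    """
    if name in existing:  # "Get volume type ..." found it -> nothing to change
        return existing[name], False
    existing[name] = {"name": name, **properties}  # "Create volume type ..." -> changed
    return existing[name], True

types = {}
vt, changed = ensure_volume_type(types, "LUKS", encrypted=True)
print(changed)  # → True
vt, changed = ensure_volume_type(types, "LUKS", encrypted=True)
print(changed)  # → False (second run is a no-op)
```

This is why the PLAY RECAP above reports `changed=6` on a fresh testbed: a second run of the same play would report the creates as `ok` instead.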
2025-07-04 18:37:47.409988 | orchestrator | 2025-07-04 18:37:47 | INFO  | Please wait and do not abort execution. 2025-07-04 18:37:47.410649 | orchestrator | localhost : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-04 18:37:47.410939 | orchestrator | 2025-07-04 18:37:47.411559 | orchestrator | 2025-07-04 18:37:47.412050 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-04 18:37:47.413414 | orchestrator | Friday 04 July 2025 18:37:47 +0000 (0:00:03.629) 0:00:58.636 *********** 2025-07-04 18:37:47.413638 | orchestrator | =============================================================================== 2025-07-04 18:37:47.414271 | orchestrator | Get volume type LUKS ---------------------------------------------------- 8.11s 2025-07-04 18:37:47.414685 | orchestrator | Create volume type LUKS ------------------------------------------------- 7.53s 2025-07-04 18:37:47.414986 | orchestrator | Create volume type local ------------------------------------------------ 7.18s 2025-07-04 18:37:47.415315 | orchestrator | Create public network --------------------------------------------------- 7.14s 2025-07-04 18:37:47.415850 | orchestrator | Set public network to default ------------------------------------------- 7.10s 2025-07-04 18:37:47.416215 | orchestrator | Get volume type local --------------------------------------------------- 6.43s 2025-07-04 18:37:47.416515 | orchestrator | Create public subnet ---------------------------------------------------- 5.63s 2025-07-04 18:37:47.416975 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 3.91s 2025-07-04 18:37:47.417826 | orchestrator | Create manager role ----------------------------------------------------- 3.63s 2025-07-04 18:37:47.419594 | orchestrator | Gathering Facts --------------------------------------------------------- 1.88s 2025-07-04 18:37:49.734149 | orchestrator | 2025-07-04 18:37:49 
| INFO  | It takes a moment until task 1cb3cdb2-8e25-4af7-8409-8cf5148f5f04 (image-manager) has been started and output is visible here. 2025-07-04 18:37:53.216992 | orchestrator | 2025-07-04 18:37:53 | INFO  | Processing image 'Cirros 0.6.2' 2025-07-04 18:37:53.439372 | orchestrator | 2025-07-04 18:37:53 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302 2025-07-04 18:37:53.440126 | orchestrator | 2025-07-04 18:37:53 | INFO  | Importing image Cirros 0.6.2 2025-07-04 18:37:53.441597 | orchestrator | 2025-07-04 18:37:53 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2025-07-04 18:37:55.151560 | orchestrator | 2025-07-04 18:37:55 | INFO  | Waiting for image to leave queued state... 2025-07-04 18:37:57.192267 | orchestrator | 2025-07-04 18:37:57 | INFO  | Waiting for import to complete... 2025-07-04 18:38:07.533130 | orchestrator | 2025-07-04 18:38:07 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images 2025-07-04 18:38:07.964931 | orchestrator | 2025-07-04 18:38:07 | INFO  | Checking parameters of 'Cirros 0.6.2' 2025-07-04 18:38:07.966513 | orchestrator | 2025-07-04 18:38:07 | INFO  | Setting internal_version = 0.6.2 2025-07-04 18:38:07.967587 | orchestrator | 2025-07-04 18:38:07 | INFO  | Setting image_original_user = cirros 2025-07-04 18:38:07.969602 | orchestrator | 2025-07-04 18:38:07 | INFO  | Adding tag os:cirros 2025-07-04 18:38:08.229885 | orchestrator | 2025-07-04 18:38:08 | INFO  | Setting property architecture: x86_64 2025-07-04 18:38:08.521054 | orchestrator | 2025-07-04 18:38:08 | INFO  | Setting property hw_disk_bus: scsi 2025-07-04 18:38:08.762283 | orchestrator | 2025-07-04 18:38:08 | INFO  | Setting property hw_rng_model: virtio 2025-07-04 18:38:08.998730 | orchestrator | 2025-07-04 18:38:08 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-07-04 18:38:09.236066 | orchestrator | 
2025-07-04 18:38:09 | INFO  | Setting property hw_watchdog_action: reset 2025-07-04 18:38:09.418897 | orchestrator | 2025-07-04 18:38:09 | INFO  | Setting property hypervisor_type: qemu 2025-07-04 18:38:09.661354 | orchestrator | 2025-07-04 18:38:09 | INFO  | Setting property os_distro: cirros 2025-07-04 18:38:09.877340 | orchestrator | 2025-07-04 18:38:09 | INFO  | Setting property replace_frequency: never 2025-07-04 18:38:10.113124 | orchestrator | 2025-07-04 18:38:10 | INFO  | Setting property uuid_validity: none 2025-07-04 18:38:10.308133 | orchestrator | 2025-07-04 18:38:10 | INFO  | Setting property provided_until: none 2025-07-04 18:38:10.525085 | orchestrator | 2025-07-04 18:38:10 | INFO  | Setting property image_description: Cirros 2025-07-04 18:38:10.707999 | orchestrator | 2025-07-04 18:38:10 | INFO  | Setting property image_name: Cirros 2025-07-04 18:38:10.945065 | orchestrator | 2025-07-04 18:38:10 | INFO  | Setting property internal_version: 0.6.2 2025-07-04 18:38:11.179807 | orchestrator | 2025-07-04 18:38:11 | INFO  | Setting property image_original_user: cirros 2025-07-04 18:38:11.420908 | orchestrator | 2025-07-04 18:38:11 | INFO  | Setting property os_version: 0.6.2 2025-07-04 18:38:11.612385 | orchestrator | 2025-07-04 18:38:11 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2025-07-04 18:38:11.854834 | orchestrator | 2025-07-04 18:38:11 | INFO  | Setting property image_build_date: 2023-05-30 2025-07-04 18:38:12.099076 | orchestrator | 2025-07-04 18:38:12 | INFO  | Checking status of 'Cirros 0.6.2' 2025-07-04 18:38:12.100234 | orchestrator | 2025-07-04 18:38:12 | INFO  | Checking visibility of 'Cirros 0.6.2' 2025-07-04 18:38:12.101409 | orchestrator | 2025-07-04 18:38:12 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public' 2025-07-04 18:38:12.326859 | orchestrator | 2025-07-04 18:38:12 | INFO  | Processing image 'Cirros 0.6.3' 2025-07-04 18:38:12.541358 | 
orchestrator | 2025-07-04 18:38:12 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302 2025-07-04 18:38:12.542280 | orchestrator | 2025-07-04 18:38:12 | INFO  | Importing image Cirros 0.6.3 2025-07-04 18:38:12.543206 | orchestrator | 2025-07-04 18:38:12 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2025-07-04 18:38:13.761049 | orchestrator | 2025-07-04 18:38:13 | INFO  | Waiting for image to leave queued state... 2025-07-04 18:38:15.807661 | orchestrator | 2025-07-04 18:38:15 | INFO  | Waiting for import to complete... 2025-07-04 18:38:25.933211 | orchestrator | 2025-07-04 18:38:25 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images 2025-07-04 18:38:26.202785 | orchestrator | 2025-07-04 18:38:26 | INFO  | Checking parameters of 'Cirros 0.6.3' 2025-07-04 18:38:26.203695 | orchestrator | 2025-07-04 18:38:26 | INFO  | Setting internal_version = 0.6.3 2025-07-04 18:38:26.205314 | orchestrator | 2025-07-04 18:38:26 | INFO  | Setting image_original_user = cirros 2025-07-04 18:38:26.205935 | orchestrator | 2025-07-04 18:38:26 | INFO  | Adding tag os:cirros 2025-07-04 18:38:26.448929 | orchestrator | 2025-07-04 18:38:26 | INFO  | Setting property architecture: x86_64 2025-07-04 18:38:26.661292 | orchestrator | 2025-07-04 18:38:26 | INFO  | Setting property hw_disk_bus: scsi 2025-07-04 18:38:26.877858 | orchestrator | 2025-07-04 18:38:26 | INFO  | Setting property hw_rng_model: virtio 2025-07-04 18:38:27.084701 | orchestrator | 2025-07-04 18:38:27 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-07-04 18:38:27.292949 | orchestrator | 2025-07-04 18:38:27 | INFO  | Setting property hw_watchdog_action: reset 2025-07-04 18:38:27.524258 | orchestrator | 2025-07-04 18:38:27 | INFO  | Setting property hypervisor_type: qemu 2025-07-04 18:38:27.721524 | orchestrator | 2025-07-04 18:38:27 | INFO  | Setting property 
os_distro: cirros 2025-07-04 18:38:27.927272 | orchestrator | 2025-07-04 18:38:27 | INFO  | Setting property replace_frequency: never 2025-07-04 18:38:28.303363 | orchestrator | 2025-07-04 18:38:28 | INFO  | Setting property uuid_validity: none 2025-07-04 18:38:28.494743 | orchestrator | 2025-07-04 18:38:28 | INFO  | Setting property provided_until: none 2025-07-04 18:38:28.732475 | orchestrator | 2025-07-04 18:38:28 | INFO  | Setting property image_description: Cirros 2025-07-04 18:38:28.957622 | orchestrator | 2025-07-04 18:38:28 | INFO  | Setting property image_name: Cirros 2025-07-04 18:38:29.156372 | orchestrator | 2025-07-04 18:38:29 | INFO  | Setting property internal_version: 0.6.3 2025-07-04 18:38:29.351993 | orchestrator | 2025-07-04 18:38:29 | INFO  | Setting property image_original_user: cirros 2025-07-04 18:38:29.601561 | orchestrator | 2025-07-04 18:38:29 | INFO  | Setting property os_version: 0.6.3 2025-07-04 18:38:29.817763 | orchestrator | 2025-07-04 18:38:29 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2025-07-04 18:38:30.063939 | orchestrator | 2025-07-04 18:38:30 | INFO  | Setting property image_build_date: 2024-09-26 2025-07-04 18:38:30.276781 | orchestrator | 2025-07-04 18:38:30 | INFO  | Checking status of 'Cirros 0.6.3' 2025-07-04 18:38:30.277023 | orchestrator | 2025-07-04 18:38:30 | INFO  | Checking visibility of 'Cirros 0.6.3' 2025-07-04 18:38:30.278084 | orchestrator | 2025-07-04 18:38:30 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public' 2025-07-04 18:38:31.495908 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh 2025-07-04 18:38:33.467159 | orchestrator | 2025-07-04 18:38:33 | INFO  | date: 2025-07-04 2025-07-04 18:38:33.468172 | orchestrator | 2025-07-04 18:38:33 | INFO  | image: octavia-amphora-haproxy-2024.2.20250704.qcow2 2025-07-04 18:38:33.468223 | orchestrator | 2025-07-04 18:38:33 | 
INFO  | url: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250704.qcow2 2025-07-04 18:38:33.468267 | orchestrator | 2025-07-04 18:38:33 | INFO  | checksum_url: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250704.qcow2.CHECKSUM 2025-07-04 18:38:33.490823 | orchestrator | 2025-07-04 18:38:33 | INFO  | checksum: 773b2f83a1d8ba9e55a0bc654bf2e5a9ac5d7b578d24bdf0ae4115d0f924de33 2025-07-04 18:38:33.558636 | orchestrator | 2025-07-04 18:38:33 | INFO  | It takes a moment until task 975e40fb-bb3c-49b3-ac37-7b3aa66f5cec (image-manager) has been started and output is visible here. 2025-07-04 18:38:33.806930 | orchestrator | /usr/local/lib/python3.13/site-packages/openstack_image_manager/__init__.py:5: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81. 
2025-07-04 18:38:33.807125 | orchestrator | from pkg_resources import get_distribution, DistributionNotFound 2025-07-04 18:38:36.103958 | orchestrator | 2025-07-04 18:38:36 | INFO  | Processing image 'OpenStack Octavia Amphora 2025-07-04' 2025-07-04 18:38:36.119695 | orchestrator | 2025-07-04 18:38:36 | INFO  | Tested URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250704.qcow2: 200 2025-07-04 18:38:36.120233 | orchestrator | 2025-07-04 18:38:36 | INFO  | Importing image OpenStack Octavia Amphora 2025-07-04 2025-07-04 18:38:36.120669 | orchestrator | 2025-07-04 18:38:36 | INFO  | Importing from URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250704.qcow2 2025-07-04 18:38:37.613439 | orchestrator | 2025-07-04 18:38:37 | INFO  | Waiting for image to leave queued state... 2025-07-04 18:38:39.701919 | orchestrator | 2025-07-04 18:38:39 | INFO  | Waiting for import to complete... 2025-07-04 18:38:49.785396 | orchestrator | 2025-07-04 18:38:49 | INFO  | Waiting for import to complete... 2025-07-04 18:38:59.872913 | orchestrator | 2025-07-04 18:38:59 | INFO  | Waiting for import to complete... 2025-07-04 18:39:09.975101 | orchestrator | 2025-07-04 18:39:09 | INFO  | Waiting for import to complete... 2025-07-04 18:39:20.104259 | orchestrator | 2025-07-04 18:39:20 | INFO  | Waiting for import to complete... 
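The amphora-image script above logs a `checksum_url` and a SHA-256 `checksum` for the qcow2 download. How exactly the script verifies it is not visible in this log; a generic sketch of such a verification with `hashlib` (the `verify`/`sha256_of` names are illustrative):

```python
import hashlib
import os
import tempfile

def sha256_of(path, chunk_size=1 << 20):
    """Stream a file through SHA-256 in chunks, as one would for a large image file."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(path, expected_hex):
    actual = sha256_of(path)
    if actual != expected_hex:
        raise ValueError(f"checksum mismatch: {actual} != {expected_hex}")
    return True

# Demo with a temp file standing in for the real qcow2 image:
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"hello")
    tmp = f.name
print(verify(tmp, hashlib.sha256(b"hello").hexdigest()))  # → True
os.remove(tmp)
```

Streaming in chunks keeps memory flat even for multi-gigabyte images, which matters for amphora images like the one imported here.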
2025-07-04 18:39:30.210079 | orchestrator | 2025-07-04 18:39:30 | INFO  | Import of 'OpenStack Octavia Amphora 2025-07-04' successfully completed, reloading images 2025-07-04 18:39:30.525229 | orchestrator | 2025-07-04 18:39:30 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2025-07-04' 2025-07-04 18:39:30.525641 | orchestrator | 2025-07-04 18:39:30 | INFO  | Setting internal_version = 2025-07-04 2025-07-04 18:39:30.526852 | orchestrator | 2025-07-04 18:39:30 | INFO  | Setting image_original_user = ubuntu 2025-07-04 18:39:30.528008 | orchestrator | 2025-07-04 18:39:30 | INFO  | Adding tag amphora 2025-07-04 18:39:30.796887 | orchestrator | 2025-07-04 18:39:30 | INFO  | Adding tag os:ubuntu 2025-07-04 18:39:30.987882 | orchestrator | 2025-07-04 18:39:30 | INFO  | Setting property architecture: x86_64 2025-07-04 18:39:31.171142 | orchestrator | 2025-07-04 18:39:31 | INFO  | Setting property hw_disk_bus: scsi 2025-07-04 18:39:31.379258 | orchestrator | 2025-07-04 18:39:31 | INFO  | Setting property hw_rng_model: virtio 2025-07-04 18:39:31.601577 | orchestrator | 2025-07-04 18:39:31 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-07-04 18:39:31.840210 | orchestrator | 2025-07-04 18:39:31 | INFO  | Setting property hw_watchdog_action: reset 2025-07-04 18:39:32.050258 | orchestrator | 2025-07-04 18:39:32 | INFO  | Setting property hypervisor_type: qemu 2025-07-04 18:39:32.264250 | orchestrator | 2025-07-04 18:39:32 | INFO  | Setting property os_distro: ubuntu 2025-07-04 18:39:32.464133 | orchestrator | 2025-07-04 18:39:32 | INFO  | Setting property replace_frequency: quarterly 2025-07-04 18:39:32.662902 | orchestrator | 2025-07-04 18:39:32 | INFO  | Setting property uuid_validity: last-1 2025-07-04 18:39:32.899533 | orchestrator | 2025-07-04 18:39:32 | INFO  | Setting property provided_until: none 2025-07-04 18:39:33.119941 | orchestrator | 2025-07-04 18:39:33 | INFO  | Setting property image_description: OpenStack Octavia Amphora 2025-07-04 
18:39:33.359092 | orchestrator | 2025-07-04 18:39:33 | INFO  | Setting property image_name: OpenStack Octavia Amphora 2025-07-04 18:39:33.587119 | orchestrator | 2025-07-04 18:39:33 | INFO  | Setting property internal_version: 2025-07-04 2025-07-04 18:39:33.798639 | orchestrator | 2025-07-04 18:39:33 | INFO  | Setting property image_original_user: ubuntu 2025-07-04 18:39:34.032563 | orchestrator | 2025-07-04 18:39:34 | INFO  | Setting property os_version: 2025-07-04 2025-07-04 18:39:34.259477 | orchestrator | 2025-07-04 18:39:34 | INFO  | Setting property image_source: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250704.qcow2 2025-07-04 18:39:34.470147 | orchestrator | 2025-07-04 18:39:34 | INFO  | Setting property image_build_date: 2025-07-04 2025-07-04 18:39:34.697965 | orchestrator | 2025-07-04 18:39:34 | INFO  | Checking status of 'OpenStack Octavia Amphora 2025-07-04' 2025-07-04 18:39:34.698945 | orchestrator | 2025-07-04 18:39:34 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2025-07-04' 2025-07-04 18:39:34.922629 | orchestrator | 2025-07-04 18:39:34 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate) 2025-07-04 18:39:34.922762 | orchestrator | 2025-07-04 18:39:34 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored 2025-07-04 18:39:34.923416 | orchestrator | 2025-07-04 18:39:34 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate) 2025-07-04 18:39:34.924223 | orchestrator | 2025-07-04 18:39:34 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored 2025-07-04 18:39:35.597483 | orchestrator | ok: Runtime: 0:03:05.605999 2025-07-04 18:39:35.663702 | 2025-07-04 18:39:35.663836 | TASK [Run checks] 2025-07-04 18:39:36.422604 | orchestrator | + set -e 2025-07-04 18:39:36.422768 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-07-04 18:39:36.422780 | 
orchestrator | ++ export INTERACTIVE=false
2025-07-04 18:39:36.422789 | orchestrator | ++ INTERACTIVE=false
2025-07-04 18:39:36.422795 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-07-04 18:39:36.422800 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-07-04 18:39:36.422806 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2025-07-04 18:39:36.423919 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2025-07-04 18:39:36.430367 | orchestrator |
2025-07-04 18:39:36.430438 | orchestrator | # CHECK
2025-07-04 18:39:36.430447 | orchestrator |
2025-07-04 18:39:36.430455 | orchestrator | ++ export MANAGER_VERSION=9.1.0
2025-07-04 18:39:36.430465 | orchestrator | ++ MANAGER_VERSION=9.1.0
2025-07-04 18:39:36.430473 | orchestrator | + echo
2025-07-04 18:39:36.430479 | orchestrator | + echo '# CHECK'
2025-07-04 18:39:36.430486 | orchestrator | + echo
2025-07-04 18:39:36.430497 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2025-07-04 18:39:36.431061 | orchestrator | ++ semver 9.1.0 5.0.0
2025-07-04 18:39:36.490555 | orchestrator |
2025-07-04 18:39:36.490654 | orchestrator | ## Containers @ testbed-manager
2025-07-04 18:39:36.490689 | orchestrator |
2025-07-04 18:39:36.490704 | orchestrator | + [[ 1 -eq -1 ]]
2025-07-04 18:39:36.490715 | orchestrator | + echo
2025-07-04 18:39:36.490725 | orchestrator | + echo '## Containers @ testbed-manager'
2025-07-04 18:39:36.490737 | orchestrator | + echo
2025-07-04 18:39:36.490747 | orchestrator | + osism container testbed-manager ps
2025-07-04 18:39:38.928214 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2025-07-04 18:39:38.928309 | orchestrator | e2c67b42f41f registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_blackbox_exporter
2025-07-04 18:39:38.928320 | orchestrator | d47bccb11e14
registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_alertmanager 2025-07-04 18:39:38.928324 | orchestrator | 1f28dbd32b49 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_cadvisor 2025-07-04 18:39:38.928332 | orchestrator | e980be5bc939 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_node_exporter 2025-07-04 18:39:38.928336 | orchestrator | aa89211522d1 registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_server 2025-07-04 18:39:38.928340 | orchestrator | 89a224c3008d registry.osism.tech/osism/cephclient:18.2.7 "/usr/bin/dumb-init …" 18 minutes ago Up 18 minutes cephclient 2025-07-04 18:39:38.928348 | orchestrator | d6e6788cb1bd registry.osism.tech/kolla/release/cron:3.0.20250530 "dumb-init --single-…" 31 minutes ago Up 31 minutes cron 2025-07-04 18:39:38.928352 | orchestrator | 141d3a57b2e8 registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530 "dumb-init --single-…" 32 minutes ago Up 32 minutes kolla_toolbox 2025-07-04 18:39:38.928372 | orchestrator | 5c11210c2a8a registry.osism.tech/kolla/release/fluentd:5.0.7.20250530 "dumb-init --single-…" 33 minutes ago Up 33 minutes fluentd 2025-07-04 18:39:38.928376 | orchestrator | 8778cb46814b phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 33 minutes ago Up 33 minutes (healthy) 80/tcp phpmyadmin 2025-07-04 18:39:38.928380 | orchestrator | 98d7898d1c3d registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 34 minutes ago Up 34 minutes openstackclient 2025-07-04 18:39:38.928384 | orchestrator | 752c52f83430 registry.osism.tech/osism/homer:v25.05.2 "/bin/sh /entrypoint…" 34 minutes ago Up 34 minutes (healthy) 8080/tcp homer 2025-07-04 18:39:38.928388 | orchestrator 
| 50dd800667ad registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 55 minutes ago Up 55 minutes (healthy) 192.168.16.5:3128->3128/tcp squid 2025-07-04 18:39:38.928394 | orchestrator | f5b21b6d6059 registry.osism.tech/osism/inventory-reconciler:0.20250530.0 "/sbin/tini -- /entr…" 59 minutes ago Up 41 minutes (healthy) manager-inventory_reconciler-1 2025-07-04 18:39:38.928413 | orchestrator | 2d74408d48db registry.osism.tech/osism/osism-ansible:0.20250531.0 "/entrypoint.sh osis…" 59 minutes ago Up 41 minutes (healthy) osism-ansible 2025-07-04 18:39:38.928417 | orchestrator | 13aab2fef6fc registry.osism.tech/osism/osism-kubernetes:0.20250530.0 "/entrypoint.sh osis…" 59 minutes ago Up 41 minutes (healthy) osism-kubernetes 2025-07-04 18:39:38.928421 | orchestrator | 37245437f770 registry.osism.tech/osism/kolla-ansible:0.20250530.0 "/entrypoint.sh osis…" 59 minutes ago Up 41 minutes (healthy) kolla-ansible 2025-07-04 18:39:38.928425 | orchestrator | 29eb5f3ed9b9 registry.osism.tech/osism/ceph-ansible:0.20250530.0 "/entrypoint.sh osis…" 59 minutes ago Up 41 minutes (healthy) ceph-ansible 2025-07-04 18:39:38.928429 | orchestrator | 4d28f273d380 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" 59 minutes ago Up 41 minutes (healthy) 8000/tcp manager-ara-server-1 2025-07-04 18:39:38.928433 | orchestrator | c1762aba6a6a registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" 59 minutes ago Up 42 minutes (healthy) 192.168.16.5:8000->8000/tcp manager-api-1 2025-07-04 18:39:38.928437 | orchestrator | 01afa01bc546 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- sleep…" 59 minutes ago Up 42 minutes (healthy) osismclient 2025-07-04 18:39:38.928441 | orchestrator | 60b167672cc9 registry.osism.tech/dockerhub/library/redis:7.4.4-alpine "docker-entrypoint.s…" 59 minutes ago Up 42 minutes (healthy) 6379/tcp manager-redis-1 2025-07-04 18:39:38.928450 | orchestrator | f328264f53dd 
registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" 59 minutes ago Up 42 minutes (healthy) manager-flower-1 2025-07-04 18:39:38.928454 | orchestrator | 87c1c71e28ff registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" 59 minutes ago Up 42 minutes (healthy) manager-listener-1 2025-07-04 18:39:38.928458 | orchestrator | 293f95b2d4c5 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" 59 minutes ago Up 42 minutes (healthy) manager-openstack-1 2025-07-04 18:39:38.928462 | orchestrator | 0be0743ac3f1 registry.osism.tech/dockerhub/library/mariadb:11.7.2 "docker-entrypoint.s…" 59 minutes ago Up 42 minutes (healthy) 3306/tcp manager-mariadb-1 2025-07-04 18:39:38.928466 | orchestrator | 2e68146779af registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" 59 minutes ago Up 42 minutes (healthy) manager-beat-1 2025-07-04 18:39:38.928470 | orchestrator | fc5545278769 registry.osism.tech/dockerhub/library/traefik:v3.4.1 "/entrypoint.sh trae…" About an hour ago Up About an hour (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik 2025-07-04 18:39:39.207378 | orchestrator | 2025-07-04 18:39:39.207463 | orchestrator | ## Images @ testbed-manager 2025-07-04 18:39:39.207471 | orchestrator | 2025-07-04 18:39:39.207476 | orchestrator | + echo 2025-07-04 18:39:39.207481 | orchestrator | + echo '## Images @ testbed-manager' 2025-07-04 18:39:39.207486 | orchestrator | + echo 2025-07-04 18:39:39.207490 | orchestrator | + osism container testbed-manager images 2025-07-04 18:39:41.308494 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-07-04 18:39:41.308585 | orchestrator | registry.osism.tech/osism/homer v25.05.2 abe3b0971e36 15 hours ago 11.5MB 2025-07-04 18:39:41.308606 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 3821ef84396e 15 hours ago 233MB 2025-07-04 18:39:41.308612 | orchestrator | registry.osism.tech/osism/kolla-ansible 0.20250530.0 f5f0b51afbcc 4 
weeks ago 574MB 2025-07-04 18:39:41.308616 | orchestrator | registry.osism.tech/osism/osism-ansible 0.20250531.0 eb6fb0ff8e52 4 weeks ago 578MB 2025-07-04 18:39:41.308638 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20250530 fc4477504c4f 4 weeks ago 319MB 2025-07-04 18:39:41.308642 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.4.1.20250530 33529d2e8ea7 4 weeks ago 747MB 2025-07-04 18:39:41.308646 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.7.20250530 a0c9ae28d2e7 4 weeks ago 629MB 2025-07-04 18:39:41.308651 | orchestrator | registry.osism.tech/kolla/release/prometheus-v2-server 2.55.1.20250530 48bb7d2c6b08 4 weeks ago 892MB 2025-07-04 18:39:41.308655 | orchestrator | registry.osism.tech/kolla/release/prometheus-blackbox-exporter 0.25.0.20250530 3d4c4d6fe7fa 4 weeks ago 361MB 2025-07-04 18:39:41.308659 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20250530 b51a156bac81 4 weeks ago 411MB 2025-07-04 18:39:41.308663 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20250530 a076e6a80bbc 4 weeks ago 359MB 2025-07-04 18:39:41.308667 | orchestrator | registry.osism.tech/kolla/release/prometheus-alertmanager 0.28.0.20250530 0e447338580d 4 weeks ago 457MB 2025-07-04 18:39:41.308697 | orchestrator | registry.osism.tech/osism/ceph-ansible 0.20250530.0 bce894afc91f 4 weeks ago 538MB 2025-07-04 18:39:41.308702 | orchestrator | registry.osism.tech/osism/osism-kubernetes 0.20250530.0 467731c31786 4 weeks ago 1.21GB 2025-07-04 18:39:41.308706 | orchestrator | registry.osism.tech/osism/inventory-reconciler 0.20250530.0 1b4e0cdc5cdd 4 weeks ago 308MB 2025-07-04 18:39:41.308710 | orchestrator | registry.osism.tech/osism/osism 0.20250530.0 bce098659f68 5 weeks ago 297MB 2025-07-04 18:39:41.308714 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.4-alpine 7ff232a1fe04 5 weeks ago 41.4MB 2025-07-04 18:39:41.308717 | orchestrator | 
registry.osism.tech/dockerhub/library/traefik v3.4.1 ff0a241c8a0a 5 weeks ago 224MB
2025-07-04 18:39:41.308721 | orchestrator | registry.osism.tech/osism/cephclient 18.2.7 ae977aa79826 8 weeks ago 453MB
2025-07-04 18:39:41.308725 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.7.2 6b3ebe9793bb 4 months ago 328MB
2025-07-04 18:39:41.308729 | orchestrator | phpmyadmin/phpmyadmin 5.2 0276a66ce322 5 months ago 571MB
2025-07-04 18:39:41.308733 | orchestrator | registry.osism.tech/osism/ara-server 1.7.2 bb44122eb176 10 months ago 300MB
2025-07-04 18:39:41.308736 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 12 months ago 146MB
2025-07-04 18:39:41.557644 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2025-07-04 18:39:41.557838 | orchestrator | ++ semver 9.1.0 5.0.0
2025-07-04 18:39:41.603430 | orchestrator |
2025-07-04 18:39:41.603488 | orchestrator | ## Containers @ testbed-node-0
2025-07-04 18:39:41.603494 | orchestrator |
2025-07-04 18:39:41.603507 | orchestrator | + [[ 1 -eq -1 ]]
2025-07-04 18:39:41.603511 | orchestrator | + echo
2025-07-04 18:39:41.603516 | orchestrator | + echo '## Containers @ testbed-node-0'
2025-07-04 18:39:41.603522 | orchestrator | + echo
2025-07-04 18:39:41.603526 | orchestrator | + osism container testbed-node-0 ps
2025-07-04 18:39:43.768434 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2025-07-04 18:39:43.768594 | orchestrator | a569ca980a50 registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_novncproxy
2025-07-04 18:39:43.768613 | orchestrator | 254faee369df registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_conductor
2025-07-04 18:39:43.768625 | orchestrator | f56d751d59c7 registry.osism.tech/kolla/release/nova-api:30.0.1.20250530 "dumb-init --single-…" 9
minutes ago Up 9 minutes (healthy) nova_api 2025-07-04 18:39:43.768636 | orchestrator | 3a20dda5a03a registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_scheduler 2025-07-04 18:39:43.768647 | orchestrator | 4a3690c0ab17 registry.osism.tech/kolla/release/grafana:12.0.1.20250530 "dumb-init --single-…" 10 minutes ago Up 10 minutes grafana 2025-07-04 18:39:43.768658 | orchestrator | ceb80bdfc780 registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_scheduler 2025-07-04 18:39:43.768669 | orchestrator | 83219148ff70 registry.osism.tech/kolla/release/glance-api:29.0.1.20250530 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) glance_api 2025-07-04 18:39:43.768681 | orchestrator | bd768d2b7837 registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) cinder_api 2025-07-04 18:39:43.768757 | orchestrator | c3117e2438ac registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_elasticsearch_exporter 2025-07-04 18:39:43.768786 | orchestrator | 63e25eb1a64d registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_cadvisor 2025-07-04 18:39:43.768797 | orchestrator | 4d889fca416a registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_memcached_exporter 2025-07-04 18:39:43.768808 | orchestrator | d3a20f1eec66 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_mysqld_exporter 2025-07-04 18:39:43.768819 | orchestrator | 02aba0c1f29f registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530 "dumb-init 
--single-…" 13 minutes ago Up 13 minutes prometheus_node_exporter 2025-07-04 18:39:43.768830 | orchestrator | 5ce4ba6c1fe7 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) magnum_conductor 2025-07-04 18:39:43.768841 | orchestrator | 7227a474615e registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) magnum_api 2025-07-04 18:39:43.768852 | orchestrator | 92e91425bd8d registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) neutron_server 2025-07-04 18:39:43.768863 | orchestrator | d379865e15e4 registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_worker 2025-07-04 18:39:43.768874 | orchestrator | 30512c54fc82 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_mdns 2025-07-04 18:39:43.768885 | orchestrator | 62a6db1efc83 registry.osism.tech/kolla/release/placement-api:12.0.1.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) placement_api 2025-07-04 18:39:43.768915 | orchestrator | 1e86bc62ff2f registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_producer 2025-07-04 18:39:43.768927 | orchestrator | 254b874dd8af registry.osism.tech/kolla/release/designate-central:19.0.1.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_central 2025-07-04 18:39:43.768938 | orchestrator | 3dfbe573b30e registry.osism.tech/kolla/release/designate-api:19.0.1.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_api 2025-07-04 18:39:43.768949 | orchestrator | a428ed4abd8b registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530 
"dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_worker 2025-07-04 18:39:43.768960 | orchestrator | 507b4db4efbd registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_backend_bind9 2025-07-04 18:39:43.768971 | orchestrator | cd7d51193ee7 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_keystone_listener 2025-07-04 18:39:43.768982 | orchestrator | f83f77b0bf43 registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) barbican_api 2025-07-04 18:39:43.769008 | orchestrator | 376ccbf9d776 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 17 minutes ago Up 17 minutes ceph-mgr-testbed-node-0 2025-07-04 18:39:43.769019 | orchestrator | 7563a1377f83 registry.osism.tech/kolla/release/keystone:26.0.1.20250530 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone 2025-07-04 18:39:43.769030 | orchestrator | b6a79a1d2882 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_fernet 2025-07-04 18:39:43.769041 | orchestrator | 0fc3d222e613 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) keystone_ssh 2025-07-04 18:39:43.769058 | orchestrator | 10ddca18d408 registry.osism.tech/kolla/release/horizon:25.1.1.20250530 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) horizon 2025-07-04 18:39:43.769069 | orchestrator | 23bf6501270a registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530 "dumb-init -- kolla_…" 22 minutes ago Up 22 minutes (healthy) mariadb 2025-07-04 18:39:43.769080 | orchestrator | 66d683cb06d3 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530 "dumb-init 
--single-…" 23 minutes ago Up 23 minutes (healthy) opensearch_dashboards 2025-07-04 18:39:43.769091 | orchestrator | a714a91bc5d9 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" 24 minutes ago Up 24 minutes ceph-crash-testbed-node-0 2025-07-04 18:39:43.769102 | orchestrator | f26c19f01d7e registry.osism.tech/kolla/release/opensearch:2.19.2.20250530 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) opensearch 2025-07-04 18:39:43.769113 | orchestrator | c6081e8f900c registry.osism.tech/kolla/release/keepalived:2.2.7.20250530 "dumb-init --single-…" 25 minutes ago Up 25 minutes keepalived 2025-07-04 18:39:43.769124 | orchestrator | 919e10f1bf31 registry.osism.tech/kolla/release/proxysql:2.7.3.20250530 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) proxysql 2025-07-04 18:39:43.769135 | orchestrator | 36d4b5a1171c registry.osism.tech/kolla/release/haproxy:2.6.12.20250530 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) haproxy 2025-07-04 18:39:43.769151 | orchestrator | 1f82d8836140 registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_northd 2025-07-04 18:39:43.769162 | orchestrator | f09029ecb5b0 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_sb_db 2025-07-04 18:39:43.769179 | orchestrator | ebc741daa522 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_nb_db 2025-07-04 18:39:43.769191 | orchestrator | cc98b65f5b3b registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 29 minutes ago Up 29 minutes ceph-mon-testbed-node-0 2025-07-04 18:39:43.769202 | orchestrator | 6246a5d6c0d1 registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530 "dumb-init --single-…" 29 minutes ago Up 29 minutes ovn_controller 2025-07-04 18:39:43.769213 | orchestrator | 6e65ac72db0f 
registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) rabbitmq
2025-07-04 18:39:43.769231 | orchestrator | e63ebaf25938 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) openvswitch_vswitchd
2025-07-04 18:39:43.769242 | orchestrator | b0ac42fb2519 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) openvswitch_db
2025-07-04 18:39:43.769253 | orchestrator | 9180ce139c29 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) redis_sentinel
2025-07-04 18:39:43.769264 | orchestrator | 05821ab0d630 registry.osism.tech/kolla/release/redis:7.0.15.20250530 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) redis
2025-07-04 18:39:43.769275 | orchestrator | 0a391c1c07f3 registry.osism.tech/kolla/release/memcached:1.6.18.20250530 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) memcached
2025-07-04 18:39:43.769286 | orchestrator | 427db3b674db registry.osism.tech/kolla/release/cron:3.0.20250530 "dumb-init --single-…" 31 minutes ago Up 31 minutes cron
2025-07-04 18:39:43.769297 | orchestrator | 396d46410e01 registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530 "dumb-init --single-…" 33 minutes ago Up 33 minutes kolla_toolbox
2025-07-04 18:39:43.769308 | orchestrator | 6f1ce7ba8969 registry.osism.tech/kolla/release/fluentd:5.0.7.20250530 "dumb-init --single-…" 34 minutes ago Up 34 minutes fluentd
2025-07-04 18:39:44.023624 | orchestrator |
2025-07-04 18:39:44.023754 | orchestrator | ## Images @ testbed-node-0
2025-07-04 18:39:44.023768 | orchestrator |
2025-07-04 18:39:44.023776 | orchestrator | + echo
2025-07-04 18:39:44.023790 | orchestrator | + echo '## Images @ testbed-node-0'
2025-07-04 18:39:44.023803 | orchestrator | + echo
2025-07-04
18:39:44.023815 | orchestrator | + osism container testbed-node-0 images 2025-07-04 18:39:46.263626 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-07-04 18:39:46.263834 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.18.20250530 174e220ad7bd 4 weeks ago 319MB 2025-07-04 18:39:46.263863 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20250530 fc4477504c4f 4 weeks ago 319MB 2025-07-04 18:39:46.263882 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.7.20250530 e984e28a57b0 4 weeks ago 330MB 2025-07-04 18:39:46.263901 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.2.20250530 4cfdb500286b 4 weeks ago 1.59GB 2025-07-04 18:39:46.263922 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.2.20250530 6fcb2e3a907b 4 weeks ago 1.55GB 2025-07-04 18:39:46.263941 | orchestrator | registry.osism.tech/kolla/release/proxysql 2.7.3.20250530 a15c96a3369b 4 weeks ago 419MB 2025-07-04 18:39:46.263960 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.4.1.20250530 33529d2e8ea7 4 weeks ago 747MB 2025-07-04 18:39:46.263979 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.6.12.20250530 e5b003449f46 4 weeks ago 327MB 2025-07-04 18:39:46.263998 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20250530 6b32f249a415 4 weeks ago 376MB 2025-07-04 18:39:46.264017 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.7.20250530 a0c9ae28d2e7 4 weeks ago 629MB 2025-07-04 18:39:46.264035 | orchestrator | registry.osism.tech/kolla/release/grafana 12.0.1.20250530 a3fa8a6a4c8c 4 weeks ago 1.01GB 2025-07-04 18:39:46.264055 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.13.20250530 5a4e6980c376 4 weeks ago 591MB 2025-07-04 18:39:46.264107 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20250530 acd5d7cf8545 4 weeks ago 354MB 2025-07-04 18:39:46.264125 | orchestrator | 
registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20250530 528199032acc 4 weeks ago 352MB 2025-07-04 18:39:46.264142 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20250530 b51a156bac81 4 weeks ago 411MB 2025-07-04 18:39:46.264159 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20250530 1ba9b68ab0fa 4 weeks ago 345MB 2025-07-04 18:39:46.264177 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20250530 a076e6a80bbc 4 weeks ago 359MB 2025-07-04 18:39:46.264195 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20250530 854fb3fbb8d1 4 weeks ago 326MB 2025-07-04 18:39:46.264215 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20250530 4439f43e0847 4 weeks ago 325MB 2025-07-04 18:39:46.264232 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.1.20250530 81218760d1ef 4 weeks ago 1.21GB 2025-07-04 18:39:46.264272 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.2.20250530 8775c34ea5d6 4 weeks ago 362MB 2025-07-04 18:39:46.264291 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.2.20250530 ebe56e768165 4 weeks ago 362MB 2025-07-04 18:39:46.264307 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20250530 9ac54d9b8655 4 weeks ago 1.15GB 2025-07-04 18:39:46.264323 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20250530 95e52651071a 4 weeks ago 1.04GB 2025-07-04 18:39:46.264340 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.1.1.20250530 47338d40fcbf 4 weeks ago 1.25GB 2025-07-04 18:39:46.264357 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20250530 ec3349a6437e 4 weeks ago 1.04GB 2025-07-04 18:39:46.264374 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20250530 726d5cfde6f9 4 weeks ago 1.04GB 2025-07-04 18:39:46.264391 | orchestrator 
| registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20250530 c2f966fc60ed 4 weeks ago 1.04GB 2025-07-04 18:39:46.264409 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20250530 7c85bdb64788 4 weeks ago 1.04GB 2025-07-04 18:39:46.264425 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20250530 ecd3067dd808 4 weeks ago 1.2GB 2025-07-04 18:39:46.264442 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20250530 95661613cfe8 4 weeks ago 1.31GB 2025-07-04 18:39:46.264495 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.1.20250530 41afac8ed4ba 4 weeks ago 1.12GB 2025-07-04 18:39:46.264514 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.1.20250530 816eaef08c5c 4 weeks ago 1.12GB 2025-07-04 18:39:46.264530 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.1.20250530 81c4f823534a 4 weeks ago 1.1GB 2025-07-04 18:39:46.264545 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.1.20250530 437ecd9dcceb 4 weeks ago 1.1GB 2025-07-04 18:39:46.264561 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.1.20250530 fd10912df5f8 4 weeks ago 1.1GB 2025-07-04 18:39:46.264577 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.1.1.20250530 8e97f769e43d 4 weeks ago 1.41GB 2025-07-04 18:39:46.264593 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.1.1.20250530 1a292444fc87 4 weeks ago 1.41GB 2025-07-04 18:39:46.264610 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20250530 9186d487d48c 4 weeks ago 1.06GB 2025-07-04 18:39:46.264641 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20250530 14234b919f18 4 weeks ago 1.06GB 2025-07-04 18:39:46.264659 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20250530 57148ade6082 4 weeks ago 1.05GB 2025-07-04 18:39:46.264675 | orchestrator | 
registry.osism.tech/kolla/release/designate-mdns 19.0.1.20250530 6d21806eb92e 4 weeks ago 1.05GB 2025-07-04 18:39:46.264692 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20250530 d5f39127ee53 4 weeks ago 1.05GB 2025-07-04 18:39:46.264735 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20250530 68be509d15c9 4 weeks ago 1.05GB 2025-07-04 18:39:46.264752 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.0.20250530 aa9066568160 4 weeks ago 1.04GB 2025-07-04 18:39:46.264778 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.0.20250530 546dea2f2472 4 weeks ago 1.04GB 2025-07-04 18:39:46.264795 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.0.1.20250530 47425e7b5ce1 4 weeks ago 1.3GB 2025-07-04 18:39:46.264811 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.0.1.20250530 9fd4859cd2ca 4 weeks ago 1.29GB 2025-07-04 18:39:46.264826 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.0.1.20250530 65e1e2f12329 4 weeks ago 1.42GB 2025-07-04 18:39:46.264836 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.0.1.20250530 ded754c3e240 4 weeks ago 1.29GB 2025-07-04 18:39:46.264846 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20250530 dc06d9c53ec5 4 weeks ago 1.06GB 2025-07-04 18:39:46.264855 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20250530 450ccd1a2872 4 weeks ago 1.06GB 2025-07-04 18:39:46.264865 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20250530 2f34913753bd 4 weeks ago 1.06GB 2025-07-04 18:39:46.264874 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20250530 fe53c77abc4a 4 weeks ago 1.11GB 2025-07-04 18:39:46.264884 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20250530 0419c85d82ab 4 weeks ago 1.13GB 2025-07-04 18:39:46.264893 | orchestrator | 
registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20250530 7eb5295204d1 4 weeks ago 1.11GB
2025-07-04 18:39:46.264903 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20250530 df0a04869ff0 4 weeks ago 1.11GB
2025-07-04 18:39:46.264912 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20250530 e1b2b0cc8e5c 4 weeks ago 1.12GB
2025-07-04 18:39:46.264921 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.2.20250530 6a22761bd4f3 4 weeks ago 947MB
2025-07-04 18:39:46.264931 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.2.20250530 694606382374 4 weeks ago 948MB
2025-07-04 18:39:46.264940 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.2.20250530 63ebc77afae1 4 weeks ago 947MB
2025-07-04 18:39:46.264950 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.2.20250530 5b8b94e53819 4 weeks ago 948MB
2025-07-04 18:39:46.264959 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 8 weeks ago 1.27GB
2025-07-04 18:39:46.537003 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2025-07-04 18:39:46.537141 | orchestrator | ++ semver 9.1.0 5.0.0
2025-07-04 18:39:46.585325 | orchestrator |
2025-07-04 18:39:46.585441 | orchestrator | ## Containers @ testbed-node-1
2025-07-04 18:39:46.585457 | orchestrator |
2025-07-04 18:39:46.585469 | orchestrator | + [[ 1 -eq -1 ]]
2025-07-04 18:39:46.585508 | orchestrator | + echo
2025-07-04 18:39:46.585521 | orchestrator | + echo '## Containers @ testbed-node-1'
2025-07-04 18:39:46.585533 | orchestrator | + echo
2025-07-04 18:39:46.585544 | orchestrator | + osism container testbed-node-1 ps
2025-07-04 18:39:48.860311 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2025-07-04 18:39:48.860435 | orchestrator | 54a5d666d1b0 registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530 "dumb-init --single-…" 7 minutes ago Up
7 minutes (healthy) nova_novncproxy 2025-07-04 18:39:48.860460 | orchestrator | 8eef9a8659bb registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_conductor 2025-07-04 18:39:48.860480 | orchestrator | a597316071a9 registry.osism.tech/kolla/release/grafana:12.0.1.20250530 "dumb-init --single-…" 8 minutes ago Up 8 minutes grafana 2025-07-04 18:39:48.860496 | orchestrator | 230e93eddf2a registry.osism.tech/kolla/release/nova-api:30.0.1.20250530 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_api 2025-07-04 18:39:48.860527 | orchestrator | ed2314965bba registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_scheduler 2025-07-04 18:39:48.860539 | orchestrator | 7054b4f25a55 registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530 "dumb-init --single-…" 12 minutes ago Up 11 minutes (healthy) cinder_scheduler 2025-07-04 18:39:48.860550 | orchestrator | e3bd49c469f8 registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) cinder_api 2025-07-04 18:39:48.860561 | orchestrator | 25bb15e502a5 registry.osism.tech/kolla/release/glance-api:29.0.1.20250530 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) glance_api 2025-07-04 18:39:48.860573 | orchestrator | 4129351e91d5 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_elasticsearch_exporter 2025-07-04 18:39:48.860586 | orchestrator | 512c8ffedf0b registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_cadvisor 2025-07-04 18:39:48.860598 | orchestrator | 542448daa777 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes 
prometheus_memcached_exporter 2025-07-04 18:39:48.860608 | orchestrator | 7b701c1c4954 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_mysqld_exporter 2025-07-04 18:39:48.860620 | orchestrator | 64f3ae87cd12 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_node_exporter 2025-07-04 18:39:48.860630 | orchestrator | 31f8b11bec94 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) magnum_conductor 2025-07-04 18:39:48.860641 | orchestrator | fb1b76b43a47 registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) neutron_server 2025-07-04 18:39:48.860652 | orchestrator | 977fddd6009d registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) magnum_api 2025-07-04 18:39:48.860663 | orchestrator | 9fc20896682b registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_worker 2025-07-04 18:39:48.860702 | orchestrator | 9e7e1aa364aa registry.osism.tech/kolla/release/placement-api:12.0.1.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) placement_api 2025-07-04 18:39:48.860751 | orchestrator | d2cbd8e24fff registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_mdns 2025-07-04 18:39:48.860782 | orchestrator | 59e3962e2b09 registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_producer 2025-07-04 18:39:48.860794 | orchestrator | 9fdf205afb92 registry.osism.tech/kolla/release/designate-central:19.0.1.20250530 "dumb-init --single-…" 
16 minutes ago Up 16 minutes (healthy) designate_central 2025-07-04 18:39:48.860804 | orchestrator | 81a331b3c507 registry.osism.tech/kolla/release/designate-api:19.0.1.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_api 2025-07-04 18:39:48.860815 | orchestrator | 8833b00ee4a1 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_worker 2025-07-04 18:39:48.860833 | orchestrator | 23e73824eae3 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_backend_bind9 2025-07-04 18:39:48.860853 | orchestrator | c2d4fcea871b registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_keystone_listener 2025-07-04 18:39:48.860866 | orchestrator | 4ae810196c1e registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) barbican_api 2025-07-04 18:39:48.860879 | orchestrator | 96e1cf594fab registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 17 minutes ago Up 17 minutes ceph-mgr-testbed-node-1 2025-07-04 18:39:48.860891 | orchestrator | cd6896f6635a registry.osism.tech/kolla/release/keystone:26.0.1.20250530 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone 2025-07-04 18:39:48.860905 | orchestrator | 31400c036b78 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_fernet 2025-07-04 18:39:48.860918 | orchestrator | b35067420328 registry.osism.tech/kolla/release/horizon:25.1.1.20250530 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) horizon 2025-07-04 18:39:48.860930 | orchestrator | 0d2b59076705 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530 "dumb-init --single-…" 20 minutes ago 
Up 20 minutes (healthy) keystone_ssh 2025-07-04 18:39:48.860942 | orchestrator | b833c293c215 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch_dashboards 2025-07-04 18:39:48.860955 | orchestrator | 561777a6f67d registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530 "dumb-init -- kolla_…" 23 minutes ago Up 23 minutes (healthy) mariadb 2025-07-04 18:39:48.860968 | orchestrator | 7f78f854a4ea registry.osism.tech/kolla/release/opensearch:2.19.2.20250530 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) opensearch 2025-07-04 18:39:48.860980 | orchestrator | 16628781b800 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" 24 minutes ago Up 24 minutes ceph-crash-testbed-node-1 2025-07-04 18:39:48.861001 | orchestrator | d519c52dd64e registry.osism.tech/kolla/release/keepalived:2.2.7.20250530 "dumb-init --single-…" 25 minutes ago Up 25 minutes keepalived 2025-07-04 18:39:48.861014 | orchestrator | 4cea9f855632 registry.osism.tech/kolla/release/proxysql:2.7.3.20250530 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) proxysql 2025-07-04 18:39:48.861026 | orchestrator | 35e46e0243f7 registry.osism.tech/kolla/release/haproxy:2.6.12.20250530 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) haproxy 2025-07-04 18:39:48.861039 | orchestrator | 30a315851dea registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530 "dumb-init --single-…" 28 minutes ago Up 27 minutes ovn_northd 2025-07-04 18:39:48.861053 | orchestrator | 8c7260c744b6 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530 "dumb-init --single-…" 28 minutes ago Up 27 minutes ovn_sb_db 2025-07-04 18:39:48.861072 | orchestrator | 9981fa51cd7c registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_nb_db 2025-07-04 18:39:48.861084 | orchestrator | ac2816ff6098 
registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 29 minutes ago Up 29 minutes ceph-mon-testbed-node-1 2025-07-04 18:39:48.861095 | orchestrator | 65cfd5c2f757 registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530 "dumb-init --single-…" 29 minutes ago Up 29 minutes ovn_controller 2025-07-04 18:39:48.861106 | orchestrator | 1fd736af2267 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) rabbitmq 2025-07-04 18:39:48.861116 | orchestrator | 06cbf61ff15d registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530 "dumb-init --single-…" 31 minutes ago Up 30 minutes (healthy) openvswitch_vswitchd 2025-07-04 18:39:48.861127 | orchestrator | 882ec7321f16 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) openvswitch_db 2025-07-04 18:39:48.861138 | orchestrator | 51a2bf12d0b6 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) redis_sentinel 2025-07-04 18:39:48.861149 | orchestrator | b5796bfc12cf registry.osism.tech/kolla/release/redis:7.0.15.20250530 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) redis 2025-07-04 18:39:48.861165 | orchestrator | 635a1bcad764 registry.osism.tech/kolla/release/memcached:1.6.18.20250530 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) memcached 2025-07-04 18:39:48.861177 | orchestrator | 0abc5a535a17 registry.osism.tech/kolla/release/cron:3.0.20250530 "dumb-init --single-…" 31 minutes ago Up 31 minutes cron 2025-07-04 18:39:48.861188 | orchestrator | bab446707063 registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530 "dumb-init --single-…" 32 minutes ago Up 32 minutes kolla_toolbox 2025-07-04 18:39:48.861199 | orchestrator | dd6a2f1344f4 registry.osism.tech/kolla/release/fluentd:5.0.7.20250530 "dumb-init --single-…" 33 minutes ago Up 33 minutes 
fluentd
2025-07-04 18:39:49.148645 | orchestrator |
2025-07-04 18:39:49.148808 | orchestrator | ## Images @ testbed-node-1
2025-07-04 18:39:49.148827 | orchestrator |
2025-07-04 18:39:49.148840 | orchestrator | + echo
2025-07-04 18:39:49.148852 | orchestrator | + echo '## Images @ testbed-node-1'
2025-07-04 18:39:49.148890 | orchestrator | + echo
2025-07-04 18:39:49.148902 | orchestrator | + osism container testbed-node-1 images
2025-07-04 18:39:51.257292 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2025-07-04 18:39:51.257399 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.18.20250530 174e220ad7bd 4 weeks ago 319MB
2025-07-04 18:39:51.257413 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20250530 fc4477504c4f 4 weeks ago 319MB
2025-07-04 18:39:51.257425 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.7.20250530 e984e28a57b0 4 weeks ago 330MB
2025-07-04 18:39:51.257436 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.2.20250530 4cfdb500286b 4 weeks ago 1.59GB
2025-07-04 18:39:51.257447 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.2.20250530 6fcb2e3a907b 4 weeks ago 1.55GB
2025-07-04 18:39:51.257458 | orchestrator | registry.osism.tech/kolla/release/proxysql 2.7.3.20250530 a15c96a3369b 4 weeks ago 419MB
2025-07-04 18:39:51.257468 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.4.1.20250530 33529d2e8ea7 4 weeks ago 747MB
2025-07-04 18:39:51.257479 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20250530 6b32f249a415 4 weeks ago 376MB
2025-07-04 18:39:51.257490 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.6.12.20250530 e5b003449f46 4 weeks ago 327MB
2025-07-04 18:39:51.257501 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.7.20250530 a0c9ae28d2e7 4 weeks ago 629MB
2025-07-04 18:39:51.257511 | orchestrator | registry.osism.tech/kolla/release/grafana 12.0.1.20250530 a3fa8a6a4c8c 4 weeks ago
1.01GB 2025-07-04 18:39:51.257522 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.13.20250530 5a4e6980c376 4 weeks ago 591MB 2025-07-04 18:39:51.257533 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20250530 acd5d7cf8545 4 weeks ago 354MB 2025-07-04 18:39:51.257544 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20250530 b51a156bac81 4 weeks ago 411MB 2025-07-04 18:39:51.257554 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20250530 528199032acc 4 weeks ago 352MB 2025-07-04 18:39:51.257565 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20250530 1ba9b68ab0fa 4 weeks ago 345MB 2025-07-04 18:39:51.257575 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20250530 a076e6a80bbc 4 weeks ago 359MB 2025-07-04 18:39:51.257586 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20250530 4439f43e0847 4 weeks ago 325MB 2025-07-04 18:39:51.257597 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20250530 854fb3fbb8d1 4 weeks ago 326MB 2025-07-04 18:39:51.257607 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.1.20250530 81218760d1ef 4 weeks ago 1.21GB 2025-07-04 18:39:51.257618 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.2.20250530 8775c34ea5d6 4 weeks ago 362MB 2025-07-04 18:39:51.257629 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.2.20250530 ebe56e768165 4 weeks ago 362MB 2025-07-04 18:39:51.257640 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20250530 9ac54d9b8655 4 weeks ago 1.15GB 2025-07-04 18:39:51.257651 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20250530 95e52651071a 4 weeks ago 1.04GB 2025-07-04 18:39:51.257662 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.1.1.20250530 
47338d40fcbf 4 weeks ago 1.25GB 2025-07-04 18:39:51.257697 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20250530 ecd3067dd808 4 weeks ago 1.2GB 2025-07-04 18:39:51.257709 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20250530 95661613cfe8 4 weeks ago 1.31GB 2025-07-04 18:39:51.257760 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.1.1.20250530 8e97f769e43d 4 weeks ago 1.41GB 2025-07-04 18:39:51.257771 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.1.1.20250530 1a292444fc87 4 weeks ago 1.41GB 2025-07-04 18:39:51.257796 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20250530 9186d487d48c 4 weeks ago 1.06GB 2025-07-04 18:39:51.257807 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20250530 14234b919f18 4 weeks ago 1.06GB 2025-07-04 18:39:51.257846 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20250530 57148ade6082 4 weeks ago 1.05GB 2025-07-04 18:39:51.257861 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20250530 6d21806eb92e 4 weeks ago 1.05GB 2025-07-04 18:39:51.257890 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20250530 d5f39127ee53 4 weeks ago 1.05GB 2025-07-04 18:39:51.257903 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20250530 68be509d15c9 4 weeks ago 1.05GB 2025-07-04 18:39:51.257916 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.0.1.20250530 47425e7b5ce1 4 weeks ago 1.3GB 2025-07-04 18:39:51.257928 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.0.1.20250530 9fd4859cd2ca 4 weeks ago 1.29GB 2025-07-04 18:39:51.257941 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.0.1.20250530 65e1e2f12329 4 weeks ago 1.42GB 2025-07-04 18:39:51.257954 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.0.1.20250530 ded754c3e240 4 
weeks ago 1.29GB
2025-07-04 18:39:51.257966 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20250530 dc06d9c53ec5 4 weeks ago 1.06GB
2025-07-04 18:39:51.257985 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20250530 450ccd1a2872 4 weeks ago 1.06GB
2025-07-04 18:39:51.257998 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20250530 2f34913753bd 4 weeks ago 1.06GB
2025-07-04 18:39:51.258010 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20250530 fe53c77abc4a 4 weeks ago 1.11GB
2025-07-04 18:39:51.258073 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20250530 0419c85d82ab 4 weeks ago 1.13GB
2025-07-04 18:39:51.258086 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20250530 7eb5295204d1 4 weeks ago 1.11GB
2025-07-04 18:39:51.258099 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.2.20250530 6a22761bd4f3 4 weeks ago 947MB
2025-07-04 18:39:51.258112 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.2.20250530 63ebc77afae1 4 weeks ago 947MB
2025-07-04 18:39:51.258124 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.2.20250530 694606382374 4 weeks ago 948MB
2025-07-04 18:39:51.258137 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.2.20250530 5b8b94e53819 4 weeks ago 948MB
2025-07-04 18:39:51.258149 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 8 weeks ago 1.27GB
2025-07-04 18:39:51.520618 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2025-07-04 18:39:51.521162 | orchestrator | ++ semver 9.1.0 5.0.0
2025-07-04 18:39:51.588351 | orchestrator |
2025-07-04 18:39:51.588490 | orchestrator | ## Containers @ testbed-node-2
2025-07-04 18:39:51.588508 | orchestrator |
2025-07-04 18:39:51.588520 | orchestrator | + [[ 1 -eq -1 ]]
2025-07-04 18:39:51.588531 | orchestrator | + echo
2025-07-04 18:39:51.588544 | orchestrator | + echo '## Containers @ testbed-node-2'
2025-07-04 18:39:51.588556 | orchestrator | + echo
2025-07-04 18:39:51.588567 | orchestrator | + osism container testbed-node-2 ps
2025-07-04 18:39:53.904964 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2025-07-04 18:39:53.905075 | orchestrator | 10b239c9d9e1 registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_novncproxy
2025-07-04 18:39:53.905092 | orchestrator | f5ae7dacf03e registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_conductor
2025-07-04 18:39:53.905103 | orchestrator | a0263272caf2 registry.osism.tech/kolla/release/grafana:12.0.1.20250530 "dumb-init --single-…" 8 minutes ago Up 8 minutes grafana
2025-07-04 18:39:53.905115 | orchestrator | 6de88c5564b0 registry.osism.tech/kolla/release/nova-api:30.0.1.20250530 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_api
2025-07-04 18:39:53.905126 | orchestrator | 26dfbb052728 registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_scheduler
2025-07-04 18:39:53.905137 | orchestrator | 397a81efd572 registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) cinder_scheduler
2025-07-04 18:39:53.905148 | orchestrator | 69572407599d registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) cinder_api
2025-07-04 18:39:53.905159 | orchestrator | 6bc5fb3cae64 registry.osism.tech/kolla/release/glance-api:29.0.1.20250530 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) glance_api
2025-07-04 18:39:53.905170 | orchestrator | 24174124a774 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530
"dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_elasticsearch_exporter 2025-07-04 18:39:53.905184 | orchestrator | 83e4fa2d9c50 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_cadvisor 2025-07-04 18:39:53.905195 | orchestrator | 5d110855d163 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_memcached_exporter 2025-07-04 18:39:53.905206 | orchestrator | aa0114744005 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_mysqld_exporter 2025-07-04 18:39:53.905220 | orchestrator | 1004cc6ab627 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_node_exporter 2025-07-04 18:39:53.905239 | orchestrator | cf6b147a1ed7 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) magnum_conductor 2025-07-04 18:39:53.905258 | orchestrator | 889b80092991 registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) neutron_server 2025-07-04 18:39:53.905276 | orchestrator | 2c16795f3cfa registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) magnum_api 2025-07-04 18:39:53.905358 | orchestrator | d59592300540 registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_worker 2025-07-04 18:39:53.905394 | orchestrator | 161e8d00878c registry.osism.tech/kolla/release/placement-api:12.0.1.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) placement_api 2025-07-04 18:39:53.905414 | orchestrator | 7daaa68661b9 
registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_mdns 2025-07-04 18:39:53.905461 | orchestrator | 8125809e8500 registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_producer 2025-07-04 18:39:53.905483 | orchestrator | a6a4797a6ab2 registry.osism.tech/kolla/release/designate-central:19.0.1.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_central 2025-07-04 18:39:53.905504 | orchestrator | 822af2e38278 registry.osism.tech/kolla/release/designate-api:19.0.1.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_api 2025-07-04 18:39:53.905523 | orchestrator | dff0dd662320 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_backend_bind9 2025-07-04 18:39:53.905543 | orchestrator | 29579b136223 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_worker 2025-07-04 18:39:53.905556 | orchestrator | de5e08e3c766 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) barbican_keystone_listener 2025-07-04 18:39:53.905569 | orchestrator | 99898358167e registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 17 minutes ago Up 17 minutes ceph-mgr-testbed-node-2 2025-07-04 18:39:53.905581 | orchestrator | 5ebd6f2e949f registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) barbican_api 2025-07-04 18:39:53.905594 | orchestrator | a726f87f35a5 registry.osism.tech/kolla/release/keystone:26.0.1.20250530 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone 2025-07-04 18:39:53.905606 | orchestrator | 
2beeb48b7df1 registry.osism.tech/kolla/release/horizon:25.1.1.20250530 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) horizon 2025-07-04 18:39:53.905619 | orchestrator | 88885b614ac5 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_fernet 2025-07-04 18:39:53.905631 | orchestrator | c5705dba488a registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) keystone_ssh 2025-07-04 18:39:53.905644 | orchestrator | 0628b926c18d registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch_dashboards 2025-07-04 18:39:53.905656 | orchestrator | 02f69c7d683b registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530 "dumb-init -- kolla_…" 22 minutes ago Up 22 minutes (healthy) mariadb 2025-07-04 18:39:53.905669 | orchestrator | 8f4e727d9b50 registry.osism.tech/kolla/release/opensearch:2.19.2.20250530 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) opensearch 2025-07-04 18:39:53.905694 | orchestrator | c133be20f5ee registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" 24 minutes ago Up 24 minutes ceph-crash-testbed-node-2 2025-07-04 18:39:53.905708 | orchestrator | 92cc1d9fd2db registry.osism.tech/kolla/release/keepalived:2.2.7.20250530 "dumb-init --single-…" 25 minutes ago Up 25 minutes keepalived 2025-07-04 18:39:53.905743 | orchestrator | 103b7de7e227 registry.osism.tech/kolla/release/proxysql:2.7.3.20250530 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) proxysql 2025-07-04 18:39:53.905756 | orchestrator | ed98bdfc27b7 registry.osism.tech/kolla/release/haproxy:2.6.12.20250530 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) haproxy 2025-07-04 18:39:53.905769 | orchestrator | 00ec5da314fa registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530 "dumb-init 
--single-…" 28 minutes ago Up 27 minutes ovn_northd 2025-07-04 18:39:53.905781 | orchestrator | 1374f14482dd registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_sb_db 2025-07-04 18:39:53.905802 | orchestrator | 488b84ca14f2 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_nb_db 2025-07-04 18:39:53.905815 | orchestrator | b5c155a92f18 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 29 minutes ago Up 29 minutes ceph-mon-testbed-node-2 2025-07-04 18:39:53.905827 | orchestrator | a6757fd283a0 registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530 "dumb-init --single-…" 29 minutes ago Up 29 minutes ovn_controller 2025-07-04 18:39:53.905841 | orchestrator | 2394a7ce111c registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) rabbitmq 2025-07-04 18:39:53.905853 | orchestrator | 45668822e1a6 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530 "dumb-init --single-…" 31 minutes ago Up 30 minutes (healthy) openvswitch_vswitchd 2025-07-04 18:39:53.905864 | orchestrator | fcd12ea68871 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) openvswitch_db 2025-07-04 18:39:53.905874 | orchestrator | b3c7ebda7d47 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) redis_sentinel 2025-07-04 18:39:53.905885 | orchestrator | a790a759f60b registry.osism.tech/kolla/release/redis:7.0.15.20250530 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) redis 2025-07-04 18:39:53.905896 | orchestrator | 330f43ad62fa registry.osism.tech/kolla/release/memcached:1.6.18.20250530 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) memcached 2025-07-04 18:39:53.905914 | 
orchestrator | 918839eddec3 registry.osism.tech/kolla/release/cron:3.0.20250530 "dumb-init --single-…" 31 minutes ago Up 31 minutes cron
2025-07-04 18:39:53.905926 | orchestrator | 8761bfb25bee registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530 "dumb-init --single-…" 32 minutes ago Up 32 minutes kolla_toolbox
2025-07-04 18:39:53.905960 | orchestrator | d08cc772e39b registry.osism.tech/kolla/release/fluentd:5.0.7.20250530 "dumb-init --single-…" 33 minutes ago Up 33 minutes fluentd
2025-07-04 18:39:54.177767 | orchestrator |
2025-07-04 18:39:54.177870 | orchestrator | ## Images @ testbed-node-2
2025-07-04 18:39:54.177910 | orchestrator |
2025-07-04 18:39:54.177922 | orchestrator | + echo
2025-07-04 18:39:54.177932 | orchestrator | + echo '## Images @ testbed-node-2'
2025-07-04 18:39:54.177947 | orchestrator | + echo
2025-07-04 18:39:54.177965 | orchestrator | + osism container testbed-node-2 images
2025-07-04 18:39:56.489930 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2025-07-04 18:39:56.490088 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.18.20250530 174e220ad7bd 4 weeks ago 319MB
2025-07-04 18:39:56.490105 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20250530 fc4477504c4f 4 weeks ago 319MB
2025-07-04 18:39:56.490126 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.7.20250530 e984e28a57b0 4 weeks ago 330MB
2025-07-04 18:39:56.490132 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.2.20250530 4cfdb500286b 4 weeks ago 1.59GB
2025-07-04 18:39:56.490139 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.2.20250530 6fcb2e3a907b 4 weeks ago 1.55GB
2025-07-04 18:39:56.490145 | orchestrator | registry.osism.tech/kolla/release/proxysql 2.7.3.20250530 a15c96a3369b 4 weeks ago 419MB
2025-07-04 18:39:56.490151 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.4.1.20250530 33529d2e8ea7 4 weeks ago 747MB
2025-07-04 18:39:56.490158 | orchestrator
| registry.osism.tech/kolla/release/haproxy 2.6.12.20250530 e5b003449f46 4 weeks ago 327MB 2025-07-04 18:39:56.490164 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20250530 6b32f249a415 4 weeks ago 376MB 2025-07-04 18:39:56.490170 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.7.20250530 a0c9ae28d2e7 4 weeks ago 629MB 2025-07-04 18:39:56.490176 | orchestrator | registry.osism.tech/kolla/release/grafana 12.0.1.20250530 a3fa8a6a4c8c 4 weeks ago 1.01GB 2025-07-04 18:39:56.490182 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.13.20250530 5a4e6980c376 4 weeks ago 591MB 2025-07-04 18:39:56.490189 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20250530 acd5d7cf8545 4 weeks ago 354MB 2025-07-04 18:39:56.490195 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20250530 b51a156bac81 4 weeks ago 411MB 2025-07-04 18:39:56.490201 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20250530 528199032acc 4 weeks ago 352MB 2025-07-04 18:39:56.490208 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20250530 1ba9b68ab0fa 4 weeks ago 345MB 2025-07-04 18:39:56.490214 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20250530 a076e6a80bbc 4 weeks ago 359MB 2025-07-04 18:39:56.490221 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20250530 4439f43e0847 4 weeks ago 325MB 2025-07-04 18:39:56.490227 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20250530 854fb3fbb8d1 4 weeks ago 326MB 2025-07-04 18:39:56.490233 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.1.20250530 81218760d1ef 4 weeks ago 1.21GB 2025-07-04 18:39:56.490239 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.2.20250530 8775c34ea5d6 4 weeks ago 362MB 2025-07-04 18:39:56.490245 | orchestrator | 
registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.2.20250530 ebe56e768165 4 weeks ago 362MB 2025-07-04 18:39:56.490251 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20250530 9ac54d9b8655 4 weeks ago 1.15GB 2025-07-04 18:39:56.490257 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20250530 95e52651071a 4 weeks ago 1.04GB 2025-07-04 18:39:56.490279 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.1.1.20250530 47338d40fcbf 4 weeks ago 1.25GB 2025-07-04 18:39:56.490286 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20250530 ecd3067dd808 4 weeks ago 1.2GB 2025-07-04 18:39:56.490292 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20250530 95661613cfe8 4 weeks ago 1.31GB 2025-07-04 18:39:56.490298 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.1.1.20250530 8e97f769e43d 4 weeks ago 1.41GB 2025-07-04 18:39:56.490305 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.1.1.20250530 1a292444fc87 4 weeks ago 1.41GB 2025-07-04 18:39:56.490311 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20250530 9186d487d48c 4 weeks ago 1.06GB 2025-07-04 18:39:56.490317 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20250530 14234b919f18 4 weeks ago 1.06GB 2025-07-04 18:39:56.490338 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20250530 57148ade6082 4 weeks ago 1.05GB 2025-07-04 18:39:56.490344 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20250530 6d21806eb92e 4 weeks ago 1.05GB 2025-07-04 18:39:56.490350 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20250530 d5f39127ee53 4 weeks ago 1.05GB 2025-07-04 18:39:56.490357 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20250530 68be509d15c9 4 weeks ago 1.05GB 2025-07-04 18:39:56.490364 | orchestrator | 
registry.osism.tech/kolla/release/nova-scheduler 30.0.1.20250530 47425e7b5ce1 4 weeks ago 1.3GB 2025-07-04 18:39:56.490370 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.0.1.20250530 9fd4859cd2ca 4 weeks ago 1.29GB 2025-07-04 18:39:56.490376 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.0.1.20250530 65e1e2f12329 4 weeks ago 1.42GB 2025-07-04 18:39:56.490382 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.0.1.20250530 ded754c3e240 4 weeks ago 1.29GB 2025-07-04 18:39:56.490388 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20250530 dc06d9c53ec5 4 weeks ago 1.06GB 2025-07-04 18:39:56.490394 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20250530 450ccd1a2872 4 weeks ago 1.06GB 2025-07-04 18:39:56.490400 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20250530 2f34913753bd 4 weeks ago 1.06GB 2025-07-04 18:39:56.490406 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20250530 fe53c77abc4a 4 weeks ago 1.11GB 2025-07-04 18:39:56.490413 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20250530 0419c85d82ab 4 weeks ago 1.13GB 2025-07-04 18:39:56.490419 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20250530 7eb5295204d1 4 weeks ago 1.11GB 2025-07-04 18:39:56.490425 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.2.20250530 6a22761bd4f3 4 weeks ago 947MB 2025-07-04 18:39:56.490431 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.2.20250530 63ebc77afae1 4 weeks ago 947MB 2025-07-04 18:39:56.490437 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.2.20250530 694606382374 4 weeks ago 948MB 2025-07-04 18:39:56.490443 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.2.20250530 5b8b94e53819 4 weeks ago 948MB 2025-07-04 18:39:56.490450 | orchestrator | 
registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 8 weeks ago 1.27GB 2025-07-04 18:39:56.759496 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh 2025-07-04 18:39:56.766283 | orchestrator | + set -e 2025-07-04 18:39:56.766323 | orchestrator | + source /opt/manager-vars.sh 2025-07-04 18:39:56.767300 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-07-04 18:39:56.767323 | orchestrator | ++ NUMBER_OF_NODES=6 2025-07-04 18:39:56.767331 | orchestrator | ++ export CEPH_VERSION=reef 2025-07-04 18:39:56.767339 | orchestrator | ++ CEPH_VERSION=reef 2025-07-04 18:39:56.767347 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-07-04 18:39:56.767356 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-07-04 18:39:56.767364 | orchestrator | ++ export MANAGER_VERSION=9.1.0 2025-07-04 18:39:56.767393 | orchestrator | ++ MANAGER_VERSION=9.1.0 2025-07-04 18:39:56.767401 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-07-04 18:39:56.767409 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-07-04 18:39:56.767417 | orchestrator | ++ export ARA=false 2025-07-04 18:39:56.767425 | orchestrator | ++ ARA=false 2025-07-04 18:39:56.767433 | orchestrator | ++ export DEPLOY_MODE=manager 2025-07-04 18:39:56.767441 | orchestrator | ++ DEPLOY_MODE=manager 2025-07-04 18:39:56.767450 | orchestrator | ++ export TEMPEST=false 2025-07-04 18:39:56.767458 | orchestrator | ++ TEMPEST=false 2025-07-04 18:39:56.767466 | orchestrator | ++ export IS_ZUUL=true 2025-07-04 18:39:56.767474 | orchestrator | ++ IS_ZUUL=true 2025-07-04 18:39:56.767482 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.186 2025-07-04 18:39:56.767490 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.186 2025-07-04 18:39:56.767498 | orchestrator | ++ export EXTERNAL_API=false 2025-07-04 18:39:56.767506 | orchestrator | ++ EXTERNAL_API=false 2025-07-04 18:39:56.767514 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-07-04 18:39:56.767522 | orchestrator | ++ 
IMAGE_USER=ubuntu 2025-07-04 18:39:56.767529 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-07-04 18:39:56.767537 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-07-04 18:39:56.767545 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-07-04 18:39:56.767553 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-07-04 18:39:56.767561 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-07-04 18:39:56.767569 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh 2025-07-04 18:39:56.776198 | orchestrator | + set -e 2025-07-04 18:39:56.776264 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-07-04 18:39:56.776273 | orchestrator | ++ export INTERACTIVE=false 2025-07-04 18:39:56.776282 | orchestrator | ++ INTERACTIVE=false 2025-07-04 18:39:56.776288 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-07-04 18:39:56.776295 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-07-04 18:39:56.776301 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-07-04 18:39:56.776309 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2025-07-04 18:39:56.779331 | orchestrator | 2025-07-04 18:39:56.779352 | orchestrator | # Ceph status 2025-07-04 18:39:56.779359 | orchestrator | 2025-07-04 18:39:56.779365 | orchestrator | ++ export MANAGER_VERSION=9.1.0 2025-07-04 18:39:56.779371 | orchestrator | ++ MANAGER_VERSION=9.1.0 2025-07-04 18:39:56.779378 | orchestrator | + echo 2025-07-04 18:39:56.779388 | orchestrator | + echo '# Ceph status' 2025-07-04 18:39:56.779395 | orchestrator | + echo 2025-07-04 18:39:56.779402 | orchestrator | + ceph -s 2025-07-04 18:39:57.386137 | orchestrator | cluster: 2025-07-04 18:39:57.386231 | orchestrator | id: 11111111-1111-1111-1111-111111111111 2025-07-04 18:39:57.386242 | orchestrator | health: HEALTH_OK 2025-07-04 18:39:57.386250 | orchestrator | 2025-07-04 18:39:57.386258 | orchestrator | 
services: 2025-07-04 18:39:57.386266 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 29m) 2025-07-04 18:39:57.386275 | orchestrator | mgr: testbed-node-1(active, since 16m), standbys: testbed-node-0, testbed-node-2 2025-07-04 18:39:57.386284 | orchestrator | mds: 1/1 daemons up, 2 standby 2025-07-04 18:39:57.386291 | orchestrator | osd: 6 osds: 6 up (since 25m), 6 in (since 26m) 2025-07-04 18:39:57.386299 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones) 2025-07-04 18:39:57.386306 | orchestrator | 2025-07-04 18:39:57.386313 | orchestrator | data: 2025-07-04 18:39:57.386320 | orchestrator | volumes: 1/1 healthy 2025-07-04 18:39:57.386327 | orchestrator | pools: 14 pools, 401 pgs 2025-07-04 18:39:57.386334 | orchestrator | objects: 521 objects, 2.2 GiB 2025-07-04 18:39:57.386341 | orchestrator | usage: 7.1 GiB used, 113 GiB / 120 GiB avail 2025-07-04 18:39:57.386348 | orchestrator | pgs: 401 active+clean 2025-07-04 18:39:57.386356 | orchestrator | 2025-07-04 18:39:57.431834 | orchestrator | 2025-07-04 18:39:57.431913 | orchestrator | # Ceph versions 2025-07-04 18:39:57.431922 | orchestrator | 2025-07-04 18:39:57.431931 | orchestrator | + echo 2025-07-04 18:39:57.431938 | orchestrator | + echo '# Ceph versions' 2025-07-04 18:39:57.431970 | orchestrator | + echo 2025-07-04 18:39:57.431977 | orchestrator | + ceph versions 2025-07-04 18:39:58.036074 | orchestrator | { 2025-07-04 18:39:58.036181 | orchestrator | "mon": { 2025-07-04 18:39:58.036196 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-07-04 18:39:58.036210 | orchestrator | }, 2025-07-04 18:39:58.036221 | orchestrator | "mgr": { 2025-07-04 18:39:58.036252 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-07-04 18:39:58.036272 | orchestrator | }, 2025-07-04 18:39:58.036292 | orchestrator | "osd": { 2025-07-04 18:39:58.036310 | orchestrator | "ceph version 
18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6 2025-07-04 18:39:58.036327 | orchestrator | }, 2025-07-04 18:39:58.036345 | orchestrator | "mds": { 2025-07-04 18:39:58.036364 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-07-04 18:39:58.036381 | orchestrator | }, 2025-07-04 18:39:58.036399 | orchestrator | "rgw": { 2025-07-04 18:39:58.036418 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-07-04 18:39:58.036437 | orchestrator | }, 2025-07-04 18:39:58.036456 | orchestrator | "overall": { 2025-07-04 18:39:58.036477 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18 2025-07-04 18:39:58.036497 | orchestrator | } 2025-07-04 18:39:58.036510 | orchestrator | } 2025-07-04 18:39:58.089966 | orchestrator | 2025-07-04 18:39:58.090127 | orchestrator | # Ceph OSD tree 2025-07-04 18:39:58.090141 | orchestrator | 2025-07-04 18:39:58.090151 | orchestrator | + echo 2025-07-04 18:39:58.090162 | orchestrator | + echo '# Ceph OSD tree' 2025-07-04 18:39:58.090172 | orchestrator | + echo 2025-07-04 18:39:58.090183 | orchestrator | + ceph osd df tree 2025-07-04 18:39:58.711715 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME 2025-07-04 18:39:58.711845 | orchestrator | -1 0.11691 - 120 GiB 7.1 GiB 6.7 GiB 6 KiB 430 MiB 113 GiB 5.92 1.00 - root default 2025-07-04 18:39:58.711857 | orchestrator | -5 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-3 2025-07-04 18:39:58.711865 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.0 GiB 963 MiB 1 KiB 74 MiB 19 GiB 5.06 0.86 174 up osd.0 2025-07-04 18:39:58.711873 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.3 GiB 1 KiB 70 MiB 19 GiB 6.77 1.14 218 up osd.3 2025-07-04 18:39:58.711881 | orchestrator | -7 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - 
host testbed-node-4 2025-07-04 18:39:58.711889 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.2 GiB 1 KiB 70 MiB 19 GiB 6.49 1.10 191 up osd.2 2025-07-04 18:39:58.711897 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1019 MiB 1 KiB 74 MiB 19 GiB 5.34 0.90 197 up osd.5 2025-07-04 18:39:58.711905 | orchestrator | -3 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.91 1.00 - host testbed-node-5 2025-07-04 18:39:58.711913 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.3 GiB 1 KiB 70 MiB 19 GiB 6.88 1.16 204 up osd.1 2025-07-04 18:39:58.711921 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 1012 MiB 939 MiB 1 KiB 74 MiB 19 GiB 4.95 0.84 186 up osd.4 2025-07-04 18:39:58.711929 | orchestrator | TOTAL 120 GiB 7.1 GiB 6.7 GiB 9.3 KiB 430 MiB 113 GiB 5.92 2025-07-04 18:39:58.711937 | orchestrator | MIN/MAX VAR: 0.84/1.16 STDDEV: 0.82 2025-07-04 18:39:58.761156 | orchestrator | 2025-07-04 18:39:58.761281 | orchestrator | # Ceph monitor status 2025-07-04 18:39:58.761299 | orchestrator | 2025-07-04 18:39:58.761311 | orchestrator | + echo 2025-07-04 18:39:58.761323 | orchestrator | + echo '# Ceph monitor status' 2025-07-04 18:39:58.761334 | orchestrator | + echo 2025-07-04 18:39:58.761345 | orchestrator | + ceph mon stat 2025-07-04 18:39:59.370174 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 6, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2 2025-07-04 18:39:59.415446 | orchestrator | 2025-07-04 18:39:59.415576 | orchestrator | # Ceph quorum status 2025-07-04 18:39:59.415617 | orchestrator | 2025-07-04 18:39:59.415637 | orchestrator | + echo 2025-07-04 18:39:59.415657 | orchestrator | + echo '# Ceph quorum status' 2025-07-04 18:39:59.415678 | orchestrator | + echo 2025-07-04 
18:39:59.415697 | orchestrator | + ceph quorum_status 2025-07-04 18:39:59.415734 | orchestrator | + jq 2025-07-04 18:40:00.011549 | orchestrator | { 2025-07-04 18:40:00.011642 | orchestrator | "election_epoch": 6, 2025-07-04 18:40:00.011655 | orchestrator | "quorum": [ 2025-07-04 18:40:00.011662 | orchestrator | 0, 2025-07-04 18:40:00.011670 | orchestrator | 1, 2025-07-04 18:40:00.011678 | orchestrator | 2 2025-07-04 18:40:00.011685 | orchestrator | ], 2025-07-04 18:40:00.011692 | orchestrator | "quorum_names": [ 2025-07-04 18:40:00.011699 | orchestrator | "testbed-node-0", 2025-07-04 18:40:00.011705 | orchestrator | "testbed-node-1", 2025-07-04 18:40:00.011711 | orchestrator | "testbed-node-2" 2025-07-04 18:40:00.011718 | orchestrator | ], 2025-07-04 18:40:00.011724 | orchestrator | "quorum_leader_name": "testbed-node-0", 2025-07-04 18:40:00.011732 | orchestrator | "quorum_age": 1746, 2025-07-04 18:40:00.011762 | orchestrator | "features": { 2025-07-04 18:40:00.011768 | orchestrator | "quorum_con": "4540138322906710015", 2025-07-04 18:40:00.011775 | orchestrator | "quorum_mon": [ 2025-07-04 18:40:00.011782 | orchestrator | "kraken", 2025-07-04 18:40:00.011788 | orchestrator | "luminous", 2025-07-04 18:40:00.011803 | orchestrator | "mimic", 2025-07-04 18:40:00.011810 | orchestrator | "osdmap-prune", 2025-07-04 18:40:00.011824 | orchestrator | "nautilus", 2025-07-04 18:40:00.011831 | orchestrator | "octopus", 2025-07-04 18:40:00.011837 | orchestrator | "pacific", 2025-07-04 18:40:00.011844 | orchestrator | "elector-pinging", 2025-07-04 18:40:00.011850 | orchestrator | "quincy", 2025-07-04 18:40:00.011856 | orchestrator | "reef" 2025-07-04 18:40:00.011863 | orchestrator | ] 2025-07-04 18:40:00.011869 | orchestrator | }, 2025-07-04 18:40:00.011876 | orchestrator | "monmap": { 2025-07-04 18:40:00.011883 | orchestrator | "epoch": 1, 2025-07-04 18:40:00.011890 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111", 2025-07-04 18:40:00.011898 | orchestrator | 
"modified": "2025-07-04T18:10:31.447086Z", 2025-07-04 18:40:00.011905 | orchestrator | "created": "2025-07-04T18:10:31.447086Z", 2025-07-04 18:40:00.011911 | orchestrator | "min_mon_release": 18, 2025-07-04 18:40:00.011918 | orchestrator | "min_mon_release_name": "reef", 2025-07-04 18:40:00.011924 | orchestrator | "election_strategy": 1, 2025-07-04 18:40:00.011930 | orchestrator | "disallowed_leaders: ": "", 2025-07-04 18:40:00.011936 | orchestrator | "stretch_mode": false, 2025-07-04 18:40:00.011942 | orchestrator | "tiebreaker_mon": "", 2025-07-04 18:40:00.011949 | orchestrator | "removed_ranks: ": "", 2025-07-04 18:40:00.011955 | orchestrator | "features": { 2025-07-04 18:40:00.011961 | orchestrator | "persistent": [ 2025-07-04 18:40:00.011968 | orchestrator | "kraken", 2025-07-04 18:40:00.011974 | orchestrator | "luminous", 2025-07-04 18:40:00.011980 | orchestrator | "mimic", 2025-07-04 18:40:00.011986 | orchestrator | "osdmap-prune", 2025-07-04 18:40:00.011993 | orchestrator | "nautilus", 2025-07-04 18:40:00.012058 | orchestrator | "octopus", 2025-07-04 18:40:00.012065 | orchestrator | "pacific", 2025-07-04 18:40:00.012072 | orchestrator | "elector-pinging", 2025-07-04 18:40:00.012079 | orchestrator | "quincy", 2025-07-04 18:40:00.012085 | orchestrator | "reef" 2025-07-04 18:40:00.012092 | orchestrator | ], 2025-07-04 18:40:00.012100 | orchestrator | "optional": [] 2025-07-04 18:40:00.012108 | orchestrator | }, 2025-07-04 18:40:00.012116 | orchestrator | "mons": [ 2025-07-04 18:40:00.012123 | orchestrator | { 2025-07-04 18:40:00.012131 | orchestrator | "rank": 0, 2025-07-04 18:40:00.012138 | orchestrator | "name": "testbed-node-0", 2025-07-04 18:40:00.012146 | orchestrator | "public_addrs": { 2025-07-04 18:40:00.012154 | orchestrator | "addrvec": [ 2025-07-04 18:40:00.012161 | orchestrator | { 2025-07-04 18:40:00.012167 | orchestrator | "type": "v2", 2025-07-04 18:40:00.012175 | orchestrator | "addr": "192.168.16.10:3300", 2025-07-04 18:40:00.012182 | 
orchestrator | "nonce": 0 2025-07-04 18:40:00.012189 | orchestrator | }, 2025-07-04 18:40:00.012196 | orchestrator | { 2025-07-04 18:40:00.012202 | orchestrator | "type": "v1", 2025-07-04 18:40:00.012210 | orchestrator | "addr": "192.168.16.10:6789", 2025-07-04 18:40:00.012217 | orchestrator | "nonce": 0 2025-07-04 18:40:00.012250 | orchestrator | } 2025-07-04 18:40:00.012257 | orchestrator | ] 2025-07-04 18:40:00.012265 | orchestrator | }, 2025-07-04 18:40:00.012272 | orchestrator | "addr": "192.168.16.10:6789/0", 2025-07-04 18:40:00.012280 | orchestrator | "public_addr": "192.168.16.10:6789/0", 2025-07-04 18:40:00.012287 | orchestrator | "priority": 0, 2025-07-04 18:40:00.012295 | orchestrator | "weight": 0, 2025-07-04 18:40:00.012302 | orchestrator | "crush_location": "{}" 2025-07-04 18:40:00.012309 | orchestrator | }, 2025-07-04 18:40:00.012316 | orchestrator | { 2025-07-04 18:40:00.012323 | orchestrator | "rank": 1, 2025-07-04 18:40:00.012330 | orchestrator | "name": "testbed-node-1", 2025-07-04 18:40:00.012337 | orchestrator | "public_addrs": { 2025-07-04 18:40:00.012345 | orchestrator | "addrvec": [ 2025-07-04 18:40:00.012352 | orchestrator | { 2025-07-04 18:40:00.012360 | orchestrator | "type": "v2", 2025-07-04 18:40:00.012367 | orchestrator | "addr": "192.168.16.11:3300", 2025-07-04 18:40:00.012374 | orchestrator | "nonce": 0 2025-07-04 18:40:00.012382 | orchestrator | }, 2025-07-04 18:40:00.012389 | orchestrator | { 2025-07-04 18:40:00.012397 | orchestrator | "type": "v1", 2025-07-04 18:40:00.012405 | orchestrator | "addr": "192.168.16.11:6789", 2025-07-04 18:40:00.012411 | orchestrator | "nonce": 0 2025-07-04 18:40:00.012418 | orchestrator | } 2025-07-04 18:40:00.012425 | orchestrator | ] 2025-07-04 18:40:00.012431 | orchestrator | }, 2025-07-04 18:40:00.012438 | orchestrator | "addr": "192.168.16.11:6789/0", 2025-07-04 18:40:00.012444 | orchestrator | "public_addr": "192.168.16.11:6789/0", 2025-07-04 18:40:00.012451 | orchestrator | "priority": 0, 
2025-07-04 18:40:00.012458 | orchestrator | "weight": 0, 2025-07-04 18:40:00.012465 | orchestrator | "crush_location": "{}" 2025-07-04 18:40:00.012471 | orchestrator | }, 2025-07-04 18:40:00.012478 | orchestrator | { 2025-07-04 18:40:00.012483 | orchestrator | "rank": 2, 2025-07-04 18:40:00.012490 | orchestrator | "name": "testbed-node-2", 2025-07-04 18:40:00.012496 | orchestrator | "public_addrs": { 2025-07-04 18:40:00.012503 | orchestrator | "addrvec": [ 2025-07-04 18:40:00.012508 | orchestrator | { 2025-07-04 18:40:00.012514 | orchestrator | "type": "v2", 2025-07-04 18:40:00.012519 | orchestrator | "addr": "192.168.16.12:3300", 2025-07-04 18:40:00.012525 | orchestrator | "nonce": 0 2025-07-04 18:40:00.012531 | orchestrator | }, 2025-07-04 18:40:00.012537 | orchestrator | { 2025-07-04 18:40:00.012543 | orchestrator | "type": "v1", 2025-07-04 18:40:00.012550 | orchestrator | "addr": "192.168.16.12:6789", 2025-07-04 18:40:00.012556 | orchestrator | "nonce": 0 2025-07-04 18:40:00.012562 | orchestrator | } 2025-07-04 18:40:00.012569 | orchestrator | ] 2025-07-04 18:40:00.012575 | orchestrator | }, 2025-07-04 18:40:00.012582 | orchestrator | "addr": "192.168.16.12:6789/0", 2025-07-04 18:40:00.012587 | orchestrator | "public_addr": "192.168.16.12:6789/0", 2025-07-04 18:40:00.012594 | orchestrator | "priority": 0, 2025-07-04 18:40:00.012599 | orchestrator | "weight": 0, 2025-07-04 18:40:00.012606 | orchestrator | "crush_location": "{}" 2025-07-04 18:40:00.012612 | orchestrator | } 2025-07-04 18:40:00.012618 | orchestrator | ] 2025-07-04 18:40:00.012623 | orchestrator | } 2025-07-04 18:40:00.012629 | orchestrator | } 2025-07-04 18:40:00.013475 | orchestrator | 2025-07-04 18:40:00.013515 | orchestrator | # Ceph free space status 2025-07-04 18:40:00.013524 | orchestrator | 2025-07-04 18:40:00.013531 | orchestrator | + echo 2025-07-04 18:40:00.013538 | orchestrator | + echo '# Ceph free space status' 2025-07-04 18:40:00.013545 | orchestrator | + echo 2025-07-04 
18:40:00.013552 | orchestrator | + ceph df 2025-07-04 18:40:00.608837 | orchestrator | --- RAW STORAGE --- 2025-07-04 18:40:00.608948 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED 2025-07-04 18:40:00.608980 | orchestrator | hdd 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92 2025-07-04 18:40:00.608992 | orchestrator | TOTAL 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92 2025-07-04 18:40:00.609004 | orchestrator | 2025-07-04 18:40:00.609016 | orchestrator | --- POOLS --- 2025-07-04 18:40:00.609028 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL 2025-07-04 18:40:00.609041 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 53 GiB 2025-07-04 18:40:00.609052 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB 2025-07-04 18:40:00.609079 | orchestrator | cephfs_metadata 3 16 4.4 KiB 22 96 KiB 0 35 GiB 2025-07-04 18:40:00.609114 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB 2025-07-04 18:40:00.609125 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB 2025-07-04 18:40:00.609136 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB 2025-07-04 18:40:00.609147 | orchestrator | default.rgw.log 7 32 3.6 KiB 177 408 KiB 0 35 GiB 2025-07-04 18:40:00.609158 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB 2025-07-04 18:40:00.609169 | orchestrator | .rgw.root 9 32 1.8 KiB 5 40 KiB 0 53 GiB 2025-07-04 18:40:00.609180 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB 2025-07-04 18:40:00.609191 | orchestrator | volumes 11 32 19 B 2 12 KiB 0 35 GiB 2025-07-04 18:40:00.609202 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 5.93 35 GiB 2025-07-04 18:40:00.609212 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB 2025-07-04 18:40:00.609223 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB 2025-07-04 18:40:00.651842 | orchestrator | ++ semver 9.1.0 5.0.0 2025-07-04 18:40:00.711104 | orchestrator | + [[ 1 -eq -1 ]] 2025-07-04 18:40:00.711218 | orchestrator | + [[ ! 
-e /etc/redhat-release ]] 2025-07-04 18:40:00.711245 | orchestrator | + osism apply facts 2025-07-04 18:40:02.504937 | orchestrator | Registering Redlock._acquired_script 2025-07-04 18:40:02.505043 | orchestrator | Registering Redlock._extend_script 2025-07-04 18:40:02.505060 | orchestrator | Registering Redlock._release_script 2025-07-04 18:40:02.578532 | orchestrator | 2025-07-04 18:40:02 | INFO  | Task 74ae05fd-a144-428a-bb20-8454f436d428 (facts) was prepared for execution. 2025-07-04 18:40:02.578627 | orchestrator | 2025-07-04 18:40:02 | INFO  | It takes a moment until task 74ae05fd-a144-428a-bb20-8454f436d428 (facts) has been started and output is visible here. 2025-07-04 18:40:06.763655 | orchestrator | 2025-07-04 18:40:06.767420 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-07-04 18:40:06.768545 | orchestrator | 2025-07-04 18:40:06.769805 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-07-04 18:40:06.770357 | orchestrator | Friday 04 July 2025 18:40:06 +0000 (0:00:00.249) 0:00:00.249 *********** 2025-07-04 18:40:08.174225 | orchestrator | ok: [testbed-manager] 2025-07-04 18:40:08.174343 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:40:08.174369 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:40:08.174388 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:40:08.174993 | orchestrator | ok: [testbed-node-3] 2025-07-04 18:40:08.176558 | orchestrator | ok: [testbed-node-4] 2025-07-04 18:40:08.176612 | orchestrator | ok: [testbed-node-5] 2025-07-04 18:40:08.177818 | orchestrator | 2025-07-04 18:40:08.179031 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-07-04 18:40:08.180109 | orchestrator | Friday 04 July 2025 18:40:08 +0000 (0:00:01.408) 0:00:01.658 *********** 2025-07-04 18:40:08.323751 | orchestrator | skipping: [testbed-manager] 2025-07-04 18:40:08.419863 | orchestrator | skipping: 
[testbed-node-0] 2025-07-04 18:40:08.497889 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:40:08.573414 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:40:08.658830 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:40:09.339833 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:40:09.342000 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:40:09.343610 | orchestrator | 2025-07-04 18:40:09.344493 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-07-04 18:40:09.345487 | orchestrator | 2025-07-04 18:40:09.346542 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-07-04 18:40:09.346952 | orchestrator | Friday 04 July 2025 18:40:09 +0000 (0:00:01.169) 0:00:02.827 *********** 2025-07-04 18:40:14.638336 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:40:14.640351 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:40:14.642704 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:40:14.645025 | orchestrator | ok: [testbed-manager] 2025-07-04 18:40:14.645131 | orchestrator | ok: [testbed-node-3] 2025-07-04 18:40:14.645152 | orchestrator | ok: [testbed-node-4] 2025-07-04 18:40:14.647052 | orchestrator | ok: [testbed-node-5] 2025-07-04 18:40:14.651855 | orchestrator | 2025-07-04 18:40:14.651909 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-07-04 18:40:14.652922 | orchestrator | 2025-07-04 18:40:14.653907 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-07-04 18:40:14.654610 | orchestrator | Friday 04 July 2025 18:40:14 +0000 (0:00:05.298) 0:00:08.126 *********** 2025-07-04 18:40:14.817157 | orchestrator | skipping: [testbed-manager] 2025-07-04 18:40:14.900992 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:40:15.030864 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:40:15.154367 | orchestrator | 
skipping: [testbed-node-2] 2025-07-04 18:40:15.244349 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:40:15.293054 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:40:15.293575 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:40:15.294011 | orchestrator | 2025-07-04 18:40:15.294642 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-04 18:40:15.295201 | orchestrator | 2025-07-04 18:40:15 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-07-04 18:40:15.295224 | orchestrator | 2025-07-04 18:40:15 | INFO  | Please wait and do not abort execution. 2025-07-04 18:40:15.295619 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-04 18:40:15.296158 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-04 18:40:15.297541 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-04 18:40:15.297723 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-04 18:40:15.298745 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-04 18:40:15.299552 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-04 18:40:15.300493 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-04 18:40:15.300856 | orchestrator | 2025-07-04 18:40:15.301601 | orchestrator | 2025-07-04 18:40:15.302489 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-04 18:40:15.302892 | orchestrator | Friday 04 July 2025 18:40:15 +0000 (0:00:00.656) 0:00:08.782 *********** 2025-07-04 18:40:15.303165 | orchestrator | 
=============================================================================== 2025-07-04 18:40:15.303597 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.30s 2025-07-04 18:40:15.304165 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.41s 2025-07-04 18:40:15.304430 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.17s 2025-07-04 18:40:15.304859 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.66s 2025-07-04 18:40:16.029242 | orchestrator | + osism validate ceph-mons 2025-07-04 18:40:17.763815 | orchestrator | Registering Redlock._acquired_script 2025-07-04 18:40:17.763861 | orchestrator | Registering Redlock._extend_script 2025-07-04 18:40:17.763867 | orchestrator | Registering Redlock._release_script 2025-07-04 18:40:37.943612 | orchestrator | 2025-07-04 18:40:37.943717 | orchestrator | PLAY [Ceph validate mons] ****************************************************** 2025-07-04 18:40:37.943725 | orchestrator | 2025-07-04 18:40:37.943730 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-07-04 18:40:37.943753 | orchestrator | Friday 04 July 2025 18:40:22 +0000 (0:00:00.433) 0:00:00.433 *********** 2025-07-04 18:40:37.943758 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-07-04 18:40:37.943763 | orchestrator | 2025-07-04 18:40:37.943767 | orchestrator | TASK [Create report output directory] ****************************************** 2025-07-04 18:40:37.943771 | orchestrator | Friday 04 July 2025 18:40:23 +0000 (0:00:00.682) 0:00:01.116 *********** 2025-07-04 18:40:37.943775 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-07-04 18:40:37.943778 | orchestrator | 2025-07-04 18:40:37.943782 | orchestrator | TASK [Define report vars] ****************************************************** 
2025-07-04 18:40:37.943786 | orchestrator | Friday 04 July 2025 18:40:23 +0000 (0:00:00.933) 0:00:02.049 *********** 2025-07-04 18:40:37.943790 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:40:37.943796 | orchestrator | 2025-07-04 18:40:37.943800 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2025-07-04 18:40:37.943804 | orchestrator | Friday 04 July 2025 18:40:24 +0000 (0:00:00.248) 0:00:02.298 *********** 2025-07-04 18:40:37.943808 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:40:37.943812 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:40:37.943816 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:40:37.943820 | orchestrator | 2025-07-04 18:40:37.943824 | orchestrator | TASK [Get container info] ****************************************************** 2025-07-04 18:40:37.943828 | orchestrator | Friday 04 July 2025 18:40:24 +0000 (0:00:00.345) 0:00:02.644 *********** 2025-07-04 18:40:37.943832 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:40:37.943835 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:40:37.943839 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:40:37.943843 | orchestrator | 2025-07-04 18:40:37.943878 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2025-07-04 18:40:37.943882 | orchestrator | Friday 04 July 2025 18:40:25 +0000 (0:00:00.966) 0:00:03.611 *********** 2025-07-04 18:40:37.943886 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:40:37.943891 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:40:37.943895 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:40:37.943899 | orchestrator | 2025-07-04 18:40:37.943903 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2025-07-04 18:40:37.943907 | orchestrator | Friday 04 July 2025 18:40:25 +0000 (0:00:00.286) 0:00:03.897 *********** 2025-07-04 18:40:37.943911 | orchestrator | ok: [testbed-node-0] 
2025-07-04 18:40:37.943915 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:40:37.943919 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:40:37.943922 | orchestrator | 2025-07-04 18:40:37.943926 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-07-04 18:40:37.943930 | orchestrator | Friday 04 July 2025 18:40:26 +0000 (0:00:00.540) 0:00:04.438 *********** 2025-07-04 18:40:37.943934 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:40:37.943937 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:40:37.943941 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:40:37.943945 | orchestrator | 2025-07-04 18:40:37.943948 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ******************** 2025-07-04 18:40:37.943952 | orchestrator | Friday 04 July 2025 18:40:26 +0000 (0:00:00.333) 0:00:04.771 *********** 2025-07-04 18:40:37.943956 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:40:37.943960 | orchestrator | skipping: [testbed-node-1] 2025-07-04 18:40:37.943964 | orchestrator | skipping: [testbed-node-2] 2025-07-04 18:40:37.943967 | orchestrator | 2025-07-04 18:40:37.943971 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************ 2025-07-04 18:40:37.943975 | orchestrator | Friday 04 July 2025 18:40:26 +0000 (0:00:00.291) 0:00:05.062 *********** 2025-07-04 18:40:37.943978 | orchestrator | ok: [testbed-node-0] 2025-07-04 18:40:37.943982 | orchestrator | ok: [testbed-node-1] 2025-07-04 18:40:37.943986 | orchestrator | ok: [testbed-node-2] 2025-07-04 18:40:37.943989 | orchestrator | 2025-07-04 18:40:37.943993 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-07-04 18:40:37.944002 | orchestrator | Friday 04 July 2025 18:40:27 +0000 (0:00:00.304) 0:00:05.367 *********** 2025-07-04 18:40:37.944006 | orchestrator | skipping: [testbed-node-0] 2025-07-04 18:40:37.944010 | orchestrator | 
2025-07-04 18:40:37.944014 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-07-04 18:40:37.944017 | orchestrator | Friday 04 July 2025 18:40:27 +0000 (0:00:00.680) 0:00:06.047 ***********
2025-07-04 18:40:37.944021 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:40:37.944025 | orchestrator |
2025-07-04 18:40:37.944029 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-07-04 18:40:37.944032 | orchestrator | Friday 04 July 2025 18:40:28 +0000 (0:00:00.252) 0:00:06.299 ***********
2025-07-04 18:40:37.944036 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:40:37.944040 | orchestrator |
2025-07-04 18:40:37.944044 | orchestrator | TASK [Flush handlers] **********************************************************
2025-07-04 18:40:37.944048 | orchestrator | Friday 04 July 2025 18:40:28 +0000 (0:00:00.306) 0:00:06.606 ***********
2025-07-04 18:40:37.944052 | orchestrator |
2025-07-04 18:40:37.944055 | orchestrator | TASK [Flush handlers] **********************************************************
2025-07-04 18:40:37.944059 | orchestrator | Friday 04 July 2025 18:40:28 +0000 (0:00:00.067) 0:00:06.674 ***********
2025-07-04 18:40:37.944063 | orchestrator |
2025-07-04 18:40:37.944066 | orchestrator | TASK [Flush handlers] **********************************************************
2025-07-04 18:40:37.944070 | orchestrator | Friday 04 July 2025 18:40:28 +0000 (0:00:00.071) 0:00:06.746 ***********
2025-07-04 18:40:37.944074 | orchestrator |
2025-07-04 18:40:37.944095 | orchestrator | TASK [Print report file information] *******************************************
2025-07-04 18:40:37.944099 | orchestrator | Friday 04 July 2025 18:40:28 +0000 (0:00:00.074) 0:00:06.820 ***********
2025-07-04 18:40:37.944103 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:40:37.944107 | orchestrator |
2025-07-04 18:40:37.944110 | orchestrator | TASK [Fail due to missing containers] ******************************************
2025-07-04 18:40:37.944114 | orchestrator | Friday 04 July 2025 18:40:28 +0000 (0:00:00.258) 0:00:07.076 ***********
2025-07-04 18:40:37.944118 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:40:37.944122 | orchestrator |
2025-07-04 18:40:37.944138 | orchestrator | TASK [Prepare quorum test vars] ************************************************
2025-07-04 18:40:37.944143 | orchestrator | Friday 04 July 2025 18:40:29 +0000 (0:00:00.258) 0:00:07.335 ***********
2025-07-04 18:40:37.944146 | orchestrator | ok: [testbed-node-0]
2025-07-04 18:40:37.944150 | orchestrator |
2025-07-04 18:40:37.944154 | orchestrator | TASK [Get monmap info from one mon container] **********************************
2025-07-04 18:40:37.944158 | orchestrator | Friday 04 July 2025 18:40:29 +0000 (0:00:00.106) 0:00:07.442 ***********
2025-07-04 18:40:37.944163 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:40:37.944167 | orchestrator |
2025-07-04 18:40:37.944171 | orchestrator | TASK [Set quorum test data] ****************************************************
2025-07-04 18:40:37.944175 | orchestrator | Friday 04 July 2025 18:40:30 +0000 (0:00:01.597) 0:00:09.039 ***********
2025-07-04 18:40:37.944180 | orchestrator | ok: [testbed-node-0]
2025-07-04 18:40:37.944184 | orchestrator |
2025-07-04 18:40:37.944188 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] **********************
2025-07-04 18:40:37.944193 | orchestrator | Friday 04 July 2025 18:40:31 +0000 (0:00:00.316) 0:00:09.356 ***********
2025-07-04 18:40:37.944200 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:40:37.944205 | orchestrator |
2025-07-04 18:40:37.944209 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] **************************
2025-07-04 18:40:37.944213 | orchestrator | Friday 04 July 2025 18:40:31 +0000 (0:00:00.344) 0:00:09.701 ***********
2025-07-04 18:40:37.944218 | orchestrator | ok: [testbed-node-0]
2025-07-04 18:40:37.944222 | orchestrator |
2025-07-04 18:40:37.944226 | orchestrator | TASK [Set fsid test vars] ******************************************************
2025-07-04 18:40:37.944230 | orchestrator | Friday 04 July 2025 18:40:31 +0000 (0:00:00.327) 0:00:10.028 ***********
2025-07-04 18:40:37.944235 | orchestrator | ok: [testbed-node-0]
2025-07-04 18:40:37.944243 | orchestrator |
2025-07-04 18:40:37.944247 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] *************
2025-07-04 18:40:37.944252 | orchestrator | Friday 04 July 2025 18:40:32 +0000 (0:00:00.311) 0:00:10.340 ***********
2025-07-04 18:40:37.944256 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:40:37.944260 | orchestrator |
2025-07-04 18:40:37.944265 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] **********************
2025-07-04 18:40:37.944269 | orchestrator | Friday 04 July 2025 18:40:32 +0000 (0:00:00.149) 0:00:10.489 ***********
2025-07-04 18:40:37.944273 | orchestrator | ok: [testbed-node-0]
2025-07-04 18:40:37.944277 | orchestrator |
2025-07-04 18:40:37.944282 | orchestrator | TASK [Prepare status test vars] ************************************************
2025-07-04 18:40:37.944286 | orchestrator | Friday 04 July 2025 18:40:32 +0000 (0:00:00.125) 0:00:10.615 ***********
2025-07-04 18:40:37.944290 | orchestrator | ok: [testbed-node-0]
2025-07-04 18:40:37.944294 | orchestrator |
2025-07-04 18:40:37.944299 | orchestrator | TASK [Gather status data] ******************************************************
2025-07-04 18:40:37.944303 | orchestrator | Friday 04 July 2025 18:40:32 +0000 (0:00:00.122) 0:00:10.737 ***********
2025-07-04 18:40:37.944307 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:40:37.944311 | orchestrator |
2025-07-04 18:40:37.944316 | orchestrator | TASK [Set health test data] ****************************************************
2025-07-04 18:40:37.944320 | orchestrator | Friday 04 July 2025 18:40:34 +0000 (0:00:01.402) 0:00:12.140 ***********
2025-07-04 18:40:37.944324 | orchestrator | ok: [testbed-node-0]
2025-07-04 18:40:37.944328 | orchestrator |
2025-07-04 18:40:37.944332 | orchestrator | TASK [Fail cluster-health if health is not acceptable] *************************
2025-07-04 18:40:37.944336 | orchestrator | Friday 04 July 2025 18:40:34 +0000 (0:00:00.299) 0:00:12.440 ***********
2025-07-04 18:40:37.944341 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:40:37.944345 | orchestrator |
2025-07-04 18:40:37.944349 | orchestrator | TASK [Pass cluster-health if health is acceptable] *****************************
2025-07-04 18:40:37.944353 | orchestrator | Friday 04 July 2025 18:40:34 +0000 (0:00:00.140) 0:00:12.581 ***********
2025-07-04 18:40:37.944358 | orchestrator | ok: [testbed-node-0]
2025-07-04 18:40:37.944362 | orchestrator |
2025-07-04 18:40:37.944366 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] ****************
2025-07-04 18:40:37.944370 | orchestrator | Friday 04 July 2025 18:40:34 +0000 (0:00:00.145) 0:00:12.726 ***********
2025-07-04 18:40:37.944375 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:40:37.944379 | orchestrator |
2025-07-04 18:40:37.944383 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] ****************************
2025-07-04 18:40:37.944387 | orchestrator | Friday 04 July 2025 18:40:34 +0000 (0:00:00.128) 0:00:12.855 ***********
2025-07-04 18:40:37.944392 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:40:37.944396 | orchestrator |
2025-07-04 18:40:37.944400 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2025-07-04 18:40:37.944404 | orchestrator | Friday 04 July 2025 18:40:35 +0000 (0:00:00.346) 0:00:13.201 ***********
2025-07-04 18:40:37.944409 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-07-04 18:40:37.944413 | orchestrator |
2025-07-04 18:40:37.944417 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2025-07-04 18:40:37.944422 | orchestrator | Friday 04 July 2025 18:40:35 +0000 (0:00:00.268) 0:00:13.469 ***********
2025-07-04 18:40:37.944426 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:40:37.944430 | orchestrator |
2025-07-04 18:40:37.944434 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-07-04 18:40:37.944439 | orchestrator | Friday 04 July 2025 18:40:35 +0000 (0:00:00.257) 0:00:13.727 ***********
2025-07-04 18:40:37.944443 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-07-04 18:40:37.944447 | orchestrator |
2025-07-04 18:40:37.944451 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-07-04 18:40:37.944456 | orchestrator | Friday 04 July 2025 18:40:37 +0000 (0:00:01.570) 0:00:15.297 ***********
2025-07-04 18:40:37.944463 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-07-04 18:40:37.944468 | orchestrator |
2025-07-04 18:40:37.944472 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-07-04 18:40:37.944476 | orchestrator | Friday 04 July 2025 18:40:37 +0000 (0:00:00.271) 0:00:15.568 ***********
2025-07-04 18:40:37.944480 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-07-04 18:40:37.944485 | orchestrator |
2025-07-04 18:40:37.944492 | orchestrator | TASK [Flush handlers] **********************************************************
2025-07-04 18:40:40.433298 | orchestrator | Friday 04 July 2025 18:40:37 +0000 (0:00:00.254) 0:00:15.823 ***********
2025-07-04 18:40:40.433403 | orchestrator |
2025-07-04 18:40:40.433416 | orchestrator | TASK [Flush handlers] **********************************************************
2025-07-04 18:40:40.433426 | orchestrator | Friday 04 July 2025 18:40:37 +0000 (0:00:00.074) 0:00:15.897 ***********
2025-07-04 18:40:40.433435 | orchestrator |
2025-07-04 18:40:40.433445 | orchestrator | TASK [Flush handlers] **********************************************************
2025-07-04 18:40:40.433454 | orchestrator | Friday 04 July 2025 18:40:37 +0000 (0:00:00.069) 0:00:15.966 ***********
2025-07-04 18:40:40.433463 | orchestrator |
2025-07-04 18:40:40.433473 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2025-07-04 18:40:40.433482 | orchestrator | Friday 04 July 2025 18:40:37 +0000 (0:00:00.072) 0:00:16.038 ***********
2025-07-04 18:40:40.433492 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-07-04 18:40:40.433501 | orchestrator |
2025-07-04 18:40:40.433527 | orchestrator | TASK [Print report file information] *******************************************
2025-07-04 18:40:40.433537 | orchestrator | Friday 04 July 2025 18:40:39 +0000 (0:00:01.543) 0:00:17.582 ***********
2025-07-04 18:40:40.433546 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2025-07-04 18:40:40.433555 | orchestrator |  "msg": [
2025-07-04 18:40:40.433566 | orchestrator |  "Validator run completed.",
2025-07-04 18:40:40.433576 | orchestrator |  "You can find the report file here:",
2025-07-04 18:40:40.433586 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2025-07-04T18:40:22+00:00-report.json",
2025-07-04 18:40:40.433596 | orchestrator |  "on the following host:",
2025-07-04 18:40:40.433606 | orchestrator |  "testbed-manager"
2025-07-04 18:40:40.433615 | orchestrator |  ]
2025-07-04 18:40:40.433624 | orchestrator | }
2025-07-04 18:40:40.433634 | orchestrator |
2025-07-04 18:40:40.433647 | orchestrator | PLAY RECAP *********************************************************************
2025-07-04 18:40:40.433657 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2025-07-04 18:40:40.433668 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-04 18:40:40.433677 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-04 18:40:40.433686 | orchestrator |
2025-07-04 18:40:40.433695 | orchestrator |
2025-07-04 18:40:40.433704 | orchestrator | TASKS RECAP ********************************************************************
2025-07-04 18:40:40.433713 | orchestrator | Friday 04 July 2025 18:40:40 +0000 (0:00:00.606) 0:00:18.188 ***********
2025-07-04 18:40:40.433722 | orchestrator | ===============================================================================
2025-07-04 18:40:40.433731 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.60s
2025-07-04 18:40:40.433740 | orchestrator | Aggregate test results step one ----------------------------------------- 1.57s
2025-07-04 18:40:40.433749 | orchestrator | Write report file ------------------------------------------------------- 1.54s
2025-07-04 18:40:40.433758 | orchestrator | Gather status data ------------------------------------------------------ 1.40s
2025-07-04 18:40:40.433767 | orchestrator | Get container info ------------------------------------------------------ 0.97s
2025-07-04 18:40:40.433795 | orchestrator | Create report output directory ------------------------------------------ 0.93s
2025-07-04 18:40:40.433804 | orchestrator | Get timestamp for report file ------------------------------------------- 0.68s
2025-07-04 18:40:40.433813 | orchestrator | Aggregate test results step one ----------------------------------------- 0.68s
2025-07-04 18:40:40.433822 | orchestrator | Print report file information ------------------------------------------- 0.61s
2025-07-04 18:40:40.433830 | orchestrator | Set test result to passed if container is existing ---------------------- 0.54s
2025-07-04 18:40:40.433839 | orchestrator | Pass cluster-health if status is OK (strict) ---------------------------- 0.35s
2025-07-04 18:40:40.433848 | orchestrator | Prepare test data for container existance test -------------------------- 0.35s
2025-07-04 18:40:40.433884 | orchestrator | Fail quorum test if not all monitors are in quorum ---------------------- 0.34s
2025-07-04 18:40:40.433893 | orchestrator | Prepare test data ------------------------------------------------------- 0.33s
2025-07-04 18:40:40.433902 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.33s
2025-07-04 18:40:40.433910 | orchestrator | Set quorum test data ---------------------------------------------------- 0.32s
2025-07-04 18:40:40.433919 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.31s
2025-07-04 18:40:40.433927 | orchestrator | Aggregate test results step three --------------------------------------- 0.31s
2025-07-04 18:40:40.433936 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.30s
2025-07-04 18:40:40.433944 | orchestrator | Set health test data ---------------------------------------------------- 0.30s
2025-07-04 18:40:40.705478 | orchestrator | + osism validate ceph-mgrs
2025-07-04 18:40:42.475461 | orchestrator | Registering Redlock._acquired_script
2025-07-04 18:40:42.475584 | orchestrator | Registering Redlock._extend_script
2025-07-04 18:40:42.475599 | orchestrator | Registering Redlock._release_script
2025-07-04 18:41:02.205131 | orchestrator |
2025-07-04 18:41:02.205239 | orchestrator | PLAY [Ceph validate mgrs] ******************************************************
2025-07-04 18:41:02.205251 | orchestrator |
2025-07-04 18:41:02.205257 | orchestrator | TASK [Get timestamp for report file] *******************************************
2025-07-04 18:41:02.205265 | orchestrator | Friday 04 July 2025 18:40:47 +0000 (0:00:00.457) 0:00:00.457 ***********
2025-07-04 18:41:02.205273 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-07-04 18:41:02.205279 | orchestrator |
2025-07-04 18:41:02.205285 | orchestrator | TASK [Create report output directory] ******************************************
2025-07-04 18:41:02.205291 | orchestrator | Friday 04 July 2025 18:40:47 +0000 (0:00:00.670) 0:00:01.128 ***********
2025-07-04 18:41:02.205297 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-07-04 18:41:02.205303 | orchestrator |
2025-07-04 18:41:02.205310 | orchestrator | TASK [Define report vars] ******************************************************
2025-07-04 18:41:02.205317 | orchestrator | Friday 04 July 2025 18:40:48 +0000 (0:00:00.848) 0:00:01.977 ***********
2025-07-04 18:41:02.205323 | orchestrator | ok: [testbed-node-0]
2025-07-04 18:41:02.205331 | orchestrator |
2025-07-04 18:41:02.205337 | orchestrator | TASK [Prepare test data for container existance test] **************************
2025-07-04 18:41:02.205343 | orchestrator | Friday 04 July 2025 18:40:49 +0000 (0:00:00.261) 0:00:02.239 ***********
2025-07-04 18:41:02.205350 | orchestrator | ok: [testbed-node-0]
2025-07-04 18:41:02.205356 | orchestrator | ok: [testbed-node-1]
2025-07-04 18:41:02.205362 | orchestrator | ok: [testbed-node-2]
2025-07-04 18:41:02.205368 | orchestrator |
2025-07-04 18:41:02.205375 | orchestrator | TASK [Get container info] ******************************************************
2025-07-04 18:41:02.205382 | orchestrator | Friday 04 July 2025 18:40:49 +0000 (0:00:00.309) 0:00:02.549 ***********
2025-07-04 18:41:02.205388 | orchestrator | ok: [testbed-node-1]
2025-07-04 18:41:02.205395 | orchestrator | ok: [testbed-node-0]
2025-07-04 18:41:02.205401 | orchestrator | ok: [testbed-node-2]
2025-07-04 18:41:02.205408 | orchestrator |
2025-07-04 18:41:02.205415 | orchestrator | TASK [Set test result to failed if container is missing] ***********************
2025-07-04 18:41:02.205422 | orchestrator | Friday 04 July 2025 18:40:50 +0000 (0:00:00.984) 0:00:03.533 ***********
2025-07-04 18:41:02.205450 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:41:02.205456 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:41:02.205462 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:41:02.205468 | orchestrator |
2025-07-04 18:41:02.205475 | orchestrator | TASK [Set test result to passed if container is existing] **********************
2025-07-04 18:41:02.205481 | orchestrator | Friday 04 July 2025 18:40:50 +0000 (0:00:00.304) 0:00:03.838 ***********
2025-07-04 18:41:02.205487 | orchestrator | ok: [testbed-node-0]
2025-07-04 18:41:02.205493 | orchestrator | ok: [testbed-node-1]
2025-07-04 18:41:02.205498 | orchestrator | ok: [testbed-node-2]
2025-07-04 18:41:02.205503 | orchestrator |
2025-07-04 18:41:02.205510 | orchestrator | TASK [Prepare test data] *******************************************************
2025-07-04 18:41:02.205516 | orchestrator | Friday 04 July 2025 18:40:51 +0000 (0:00:00.532) 0:00:04.370 ***********
2025-07-04 18:41:02.205521 | orchestrator | ok: [testbed-node-0]
2025-07-04 18:41:02.205527 | orchestrator | ok: [testbed-node-1]
2025-07-04 18:41:02.205550 | orchestrator | ok: [testbed-node-2]
2025-07-04 18:41:02.205557 | orchestrator |
2025-07-04 18:41:02.205563 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] ********************
2025-07-04 18:41:02.205570 | orchestrator | Friday 04 July 2025 18:40:51 +0000 (0:00:00.276) 0:00:04.646 ***********
2025-07-04 18:41:02.205576 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:41:02.205582 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:41:02.205588 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:41:02.205594 | orchestrator |
2025-07-04 18:41:02.205600 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************
2025-07-04 18:41:02.205606 | orchestrator | Friday 04 July 2025 18:40:51 +0000 (0:00:00.264) 0:00:04.910 ***********
2025-07-04 18:41:02.205612 | orchestrator | ok: [testbed-node-0]
2025-07-04 18:41:02.205618 | orchestrator | ok: [testbed-node-1]
2025-07-04 18:41:02.205624 | orchestrator | ok: [testbed-node-2]
2025-07-04 18:41:02.205630 | orchestrator |
2025-07-04 18:41:02.205636 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-07-04 18:41:02.205642 | orchestrator | Friday 04 July 2025 18:40:51 +0000 (0:00:00.276) 0:00:05.186 ***********
2025-07-04 18:41:02.205648 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:41:02.205654 | orchestrator |
2025-07-04 18:41:02.205660 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-07-04 18:41:02.205667 | orchestrator | Friday 04 July 2025 18:40:52 +0000 (0:00:00.704) 0:00:05.891 ***********
2025-07-04 18:41:02.205674 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:41:02.205681 | orchestrator |
2025-07-04 18:41:02.205689 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-07-04 18:41:02.205695 | orchestrator | Friday 04 July 2025 18:40:52 +0000 (0:00:00.256) 0:00:06.147 ***********
2025-07-04 18:41:02.205701 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:41:02.205707 | orchestrator |
2025-07-04 18:41:02.205713 | orchestrator | TASK [Flush handlers] **********************************************************
2025-07-04 18:41:02.205719 | orchestrator | Friday 04 July 2025 18:40:53 +0000 (0:00:00.260) 0:00:06.407 ***********
2025-07-04 18:41:02.205726 | orchestrator |
2025-07-04 18:41:02.205732 | orchestrator | TASK [Flush handlers] **********************************************************
2025-07-04 18:41:02.205739 | orchestrator | Friday 04 July 2025 18:40:53 +0000 (0:00:00.081) 0:00:06.488 ***********
2025-07-04 18:41:02.205746 | orchestrator |
2025-07-04 18:41:02.205753 | orchestrator | TASK [Flush handlers] **********************************************************
2025-07-04 18:41:02.205759 | orchestrator | Friday 04 July 2025 18:40:53 +0000 (0:00:00.071) 0:00:06.560 ***********
2025-07-04 18:41:02.205765 | orchestrator |
2025-07-04 18:41:02.205771 | orchestrator | TASK [Print report file information] *******************************************
2025-07-04 18:41:02.205777 | orchestrator | Friday 04 July 2025 18:40:53 +0000 (0:00:00.077) 0:00:06.637 ***********
2025-07-04 18:41:02.205784 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:41:02.205790 | orchestrator |
2025-07-04 18:41:02.205796 | orchestrator | TASK [Fail due to missing containers] ******************************************
2025-07-04 18:41:02.205811 | orchestrator | Friday 04 July 2025 18:40:53 +0000 (0:00:00.251) 0:00:06.889 ***********
2025-07-04 18:41:02.205818 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:41:02.205825 | orchestrator |
2025-07-04 18:41:02.205852 | orchestrator | TASK [Define mgr module test vars] *********************************************
2025-07-04 18:41:02.205859 | orchestrator | Friday 04 July 2025 18:40:53 +0000 (0:00:00.240) 0:00:07.130 ***********
2025-07-04 18:41:02.205865 | orchestrator | ok: [testbed-node-0]
2025-07-04 18:41:02.205872 | orchestrator |
2025-07-04 18:41:02.205878 | orchestrator | TASK [Gather list of mgr modules] **********************************************
2025-07-04 18:41:02.205885 | orchestrator | Friday 04 July 2025 18:40:54 +0000 (0:00:00.123) 0:00:07.253 ***********
2025-07-04 18:41:02.205892 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:41:02.205922 | orchestrator |
2025-07-04 18:41:02.205929 | orchestrator | TASK [Parse mgr module list from json] *****************************************
2025-07-04 18:41:02.205935 | orchestrator | Friday 04 July 2025 18:40:56 +0000 (0:00:01.981) 0:00:09.235 ***********
2025-07-04 18:41:02.205941 | orchestrator | ok: [testbed-node-0]
2025-07-04 18:41:02.205947 | orchestrator |
2025-07-04 18:41:02.205954 | orchestrator | TASK [Extract list of enabled mgr modules] *************************************
2025-07-04 18:41:02.205961 | orchestrator | Friday 04 July 2025 18:40:56 +0000 (0:00:00.252) 0:00:09.487 ***********
2025-07-04 18:41:02.205968 | orchestrator | ok: [testbed-node-0]
2025-07-04 18:41:02.205974 | orchestrator |
2025-07-04 18:41:02.205981 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************
2025-07-04 18:41:02.205988 | orchestrator | Friday 04 July 2025 18:40:57 +0000 (0:00:00.817) 0:00:10.305 ***********
2025-07-04 18:41:02.205994 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:41:02.206000 | orchestrator |
2025-07-04 18:41:02.206012 | orchestrator | TASK [Pass test if required mgr modules are enabled] ***************************
2025-07-04 18:41:02.206073 | orchestrator | Friday 04 July 2025 18:40:57 +0000 (0:00:00.132) 0:00:10.438 ***********
2025-07-04 18:41:02.206079 | orchestrator | ok: [testbed-node-0]
2025-07-04 18:41:02.206085 | orchestrator |
2025-07-04 18:41:02.206090 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2025-07-04 18:41:02.206096 | orchestrator | Friday 04 July 2025 18:40:57 +0000 (0:00:00.156) 0:00:10.594 ***********
2025-07-04 18:41:02.206102 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-07-04 18:41:02.206108 | orchestrator |
2025-07-04 18:41:02.206114 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2025-07-04 18:41:02.206119 | orchestrator | Friday 04 July 2025 18:40:57 +0000 (0:00:00.264) 0:00:10.859 ***********
2025-07-04 18:41:02.206126 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:41:02.206132 | orchestrator |
2025-07-04 18:41:02.206138 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-07-04 18:41:02.206144 | orchestrator | Friday 04 July 2025 18:40:57 +0000 (0:00:00.261) 0:00:11.120 ***********
2025-07-04 18:41:02.206151 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-07-04 18:41:02.206157 | orchestrator |
2025-07-04 18:41:02.206163 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-07-04 18:41:02.206169 | orchestrator | Friday 04 July 2025 18:40:59 +0000 (0:00:01.291) 0:00:12.412 ***********
2025-07-04 18:41:02.206174 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-07-04 18:41:02.206180 | orchestrator |
2025-07-04 18:41:02.206186 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-07-04 18:41:02.206192 | orchestrator | Friday 04 July 2025 18:40:59 +0000 (0:00:00.283) 0:00:12.695 ***********
2025-07-04 18:41:02.206197 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-07-04 18:41:02.206203 | orchestrator |
2025-07-04 18:41:02.206209 | orchestrator | TASK [Flush handlers] **********************************************************
2025-07-04 18:41:02.206216 | orchestrator | Friday 04 July 2025 18:40:59 +0000 (0:00:00.270) 0:00:12.966 ***********
2025-07-04 18:41:02.206222 | orchestrator |
2025-07-04 18:41:02.206235 | orchestrator | TASK [Flush handlers] **********************************************************
2025-07-04 18:41:02.206240 | orchestrator | Friday 04 July 2025 18:40:59 +0000 (0:00:00.069) 0:00:13.035 ***********
2025-07-04 18:41:02.206246 | orchestrator |
2025-07-04 18:41:02.206251 | orchestrator | TASK [Flush handlers] **********************************************************
2025-07-04 18:41:02.206257 | orchestrator | Friday 04 July 2025 18:40:59 +0000 (0:00:00.076) 0:00:13.111 ***********
2025-07-04 18:41:02.206263 | orchestrator |
2025-07-04 18:41:02.206269 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2025-07-04 18:41:02.206275 | orchestrator | Friday 04 July 2025 18:40:59 +0000 (0:00:00.073) 0:00:13.185 ***********
2025-07-04 18:41:02.206282 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-07-04 18:41:02.206289 | orchestrator |
2025-07-04 18:41:02.206294 | orchestrator | TASK [Print report file information] *******************************************
2025-07-04 18:41:02.206300 | orchestrator | Friday 04 July 2025 18:41:01 +0000 (0:00:01.779) 0:00:14.965 ***********
2025-07-04 18:41:02.206305 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2025-07-04 18:41:02.206311 | orchestrator |  "msg": [
2025-07-04 18:41:02.206318 | orchestrator |  "Validator run completed.",
2025-07-04 18:41:02.206325 | orchestrator |  "You can find the report file here:",
2025-07-04 18:41:02.206331 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2025-07-04T18:40:47+00:00-report.json",
2025-07-04 18:41:02.206339 | orchestrator |  "on the following host:",
2025-07-04 18:41:02.206346 | orchestrator |  "testbed-manager"
2025-07-04 18:41:02.206352 | orchestrator |  ]
2025-07-04 18:41:02.206358 | orchestrator | }
2025-07-04 18:41:02.206364 | orchestrator |
2025-07-04 18:41:02.206369 | orchestrator | PLAY RECAP *********************************************************************
2025-07-04 18:41:02.206378 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-07-04 18:41:02.206386 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-04 18:41:02.206403 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-04 18:41:02.526772 | orchestrator |
2025-07-04 18:41:02.526848 | orchestrator |
2025-07-04 18:41:02.526855 | orchestrator | TASKS RECAP ********************************************************************
2025-07-04 18:41:02.526861 | orchestrator | Friday 04 July 2025 18:41:02 +0000 (0:00:00.416) 0:00:15.382 ***********
2025-07-04 18:41:02.526866 | orchestrator | ===============================================================================
2025-07-04 18:41:02.526870 | orchestrator | Gather list of mgr modules ---------------------------------------------- 1.98s
2025-07-04 18:41:02.526874 | orchestrator | Write report file ------------------------------------------------------- 1.78s
2025-07-04 18:41:02.526878 | orchestrator | Aggregate test results step one ----------------------------------------- 1.29s
2025-07-04 18:41:02.526882 | orchestrator | Get container info ------------------------------------------------------ 0.98s
2025-07-04 18:41:02.526886 | orchestrator | Create report output directory ------------------------------------------ 0.85s
2025-07-04 18:41:02.526890 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.82s
2025-07-04 18:41:02.526894 | orchestrator | Aggregate test results step one ----------------------------------------- 0.70s
2025-07-04 18:41:02.526919 | orchestrator | Get timestamp for report file ------------------------------------------- 0.67s
2025-07-04 18:41:02.526923 | orchestrator | Set test result to passed if container is existing ---------------------- 0.53s
2025-07-04 18:41:02.526945 | orchestrator | Print report file information ------------------------------------------- 0.42s
2025-07-04 18:41:02.526952 | orchestrator | Prepare test data for container existance test -------------------------- 0.31s
2025-07-04 18:41:02.526958 | orchestrator | Set test result to failed if container is missing ----------------------- 0.30s
2025-07-04 18:41:02.526982 | orchestrator | Aggregate test results step two ----------------------------------------- 0.28s
2025-07-04 18:41:02.526987 | orchestrator | Prepare test data ------------------------------------------------------- 0.28s
2025-07-04 18:41:02.526991 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.28s
2025-07-04 18:41:02.526995 | orchestrator | Aggregate test results step three --------------------------------------- 0.27s
2025-07-04 18:41:02.526999 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.26s
2025-07-04 18:41:02.527003 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.26s
2025-07-04 18:41:02.527007 | orchestrator | Define report vars ------------------------------------------------------ 0.26s
2025-07-04 18:41:02.527011 | orchestrator | Set validation result to failed if a test failed ------------------------ 0.26s
2025-07-04 18:41:02.787094 | orchestrator | + osism validate ceph-osds
2025-07-04 18:41:04.549380 | orchestrator | Registering Redlock._acquired_script
2025-07-04 18:41:04.549485 | orchestrator | Registering Redlock._extend_script
2025-07-04 18:41:04.549501 | orchestrator | Registering Redlock._release_script
2025-07-04 18:41:13.460594 | orchestrator |
2025-07-04 18:41:13.460731 | orchestrator | PLAY [Ceph validate OSDs] ******************************************************
2025-07-04 18:41:13.460748 | orchestrator |
2025-07-04 18:41:13.460760 | orchestrator | TASK [Get timestamp for report file] *******************************************
2025-07-04 18:41:13.460772 | orchestrator | Friday 04 July 2025 18:41:09 +0000 (0:00:00.438) 0:00:00.438 ***********
2025-07-04 18:41:13.460783 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-07-04 18:41:13.460795 | orchestrator |
2025-07-04 18:41:13.460806 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-07-04 18:41:13.460817 | orchestrator | Friday 04 July 2025 18:41:09 +0000 (0:00:00.657) 0:00:01.096 ***********
2025-07-04 18:41:13.460828 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-07-04 18:41:13.460839 | orchestrator |
2025-07-04 18:41:13.460850 | orchestrator | TASK [Create report output directory] ******************************************
2025-07-04 18:41:13.460861 | orchestrator | Friday 04 July 2025 18:41:10 +0000 (0:00:00.400) 0:00:01.497 ***********
2025-07-04 18:41:13.460872 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-07-04 18:41:13.460883 | orchestrator |
2025-07-04 18:41:13.460894 | orchestrator | TASK [Define report vars] ******************************************************
2025-07-04 18:41:13.460904 | orchestrator | Friday 04 July 2025 18:41:11 +0000 (0:00:00.944) 0:00:02.441 ***********
2025-07-04 18:41:13.460915 | orchestrator | ok: [testbed-node-3]
2025-07-04 18:41:13.461005 | orchestrator |
2025-07-04 18:41:13.461018 | orchestrator | TASK [Define OSD test variables] ***********************************************
2025-07-04 18:41:13.461029 | orchestrator | Friday 04 July 2025 18:41:11 +0000 (0:00:00.137) 0:00:02.579 ***********
2025-07-04 18:41:13.461040 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:41:13.461051 | orchestrator |
2025-07-04 18:41:13.461062 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2025-07-04 18:41:13.461073 | orchestrator | Friday 04 July 2025 18:41:11 +0000 (0:00:00.136) 0:00:02.716 ***********
2025-07-04 18:41:13.461084 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:41:13.461096 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:41:13.461107 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:41:13.461118 | orchestrator |
2025-07-04 18:41:13.461131 | orchestrator | TASK [Define OSD test variables] ***********************************************
2025-07-04 18:41:13.461144 | orchestrator | Friday 04 July 2025 18:41:11 +0000 (0:00:00.336) 0:00:03.052 ***********
2025-07-04 18:41:13.461157 | orchestrator | ok: [testbed-node-3]
2025-07-04 18:41:13.461169 | orchestrator |
2025-07-04 18:41:13.461181 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2025-07-04 18:41:13.461194 | orchestrator | Friday 04 July 2025 18:41:11 +0000 (0:00:00.154) 0:00:03.207 ***********
2025-07-04 18:41:13.461207 | orchestrator | ok: [testbed-node-3]
2025-07-04 18:41:13.461220 | orchestrator | ok: [testbed-node-4]
2025-07-04 18:41:13.461255 | orchestrator | ok: [testbed-node-5]
2025-07-04 18:41:13.461269 | orchestrator |
2025-07-04 18:41:13.461282 | orchestrator | TASK [Calculate total number of OSDs in cluster] *******************************
2025-07-04 18:41:13.461293 | orchestrator | Friday 04 July 2025 18:41:12 +0000 (0:00:00.323) 0:00:03.530 ***********
2025-07-04 18:41:13.461304 | orchestrator | ok: [testbed-node-3]
2025-07-04 18:41:13.461315 | orchestrator |
2025-07-04 18:41:13.461326 | orchestrator | TASK [Prepare test data] *******************************************************
2025-07-04 18:41:13.461337 | orchestrator | Friday 04 July 2025 18:41:12 +0000 (0:00:00.577) 0:00:04.107 ***********
2025-07-04 18:41:13.461348 | orchestrator | ok: [testbed-node-3]
2025-07-04 18:41:13.461358 | orchestrator | ok: [testbed-node-4]
2025-07-04 18:41:13.461369 | orchestrator | ok: [testbed-node-5]
2025-07-04 18:41:13.461380 | orchestrator |
2025-07-04 18:41:13.461390 | orchestrator | TASK [Get list of ceph-osd containers on host] *********************************
2025-07-04 18:41:13.461401 | orchestrator | Friday 04 July 2025 18:41:13 +0000 (0:00:00.504) 0:00:04.611 ***********
2025-07-04 18:41:13.461415 | orchestrator | skipping: [testbed-node-3] => (item={'id': '352aa30d2cfeea508ef0121115d3a4ad93958c03187dbd1195b2712aa24cc1de', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})
2025-07-04 18:41:13.461429 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'a12b3e0dd8107ccb8523a3bafa5d892668e59d4a4af7e28db4d358091e00a74a', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})
2025-07-04 18:41:13.461441 | orchestrator | skipping: [testbed-node-3] => (item={'id': '723abdd1fcfaa0ec7e236d1c150fda3bb67140061a0a5a805e2a0cea17049f9c', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})
2025-07-04 18:41:13.461471 | orchestrator | skipping: [testbed-node-3] => (item={'id': '21d4ec621518159a2934b1361332ceb72d1a80ba1fc938209b25004d1ff05ec7', 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})
2025-07-04 18:41:13.461483 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'b91555f06958301778d5a067f8e964cb75f147d692a4655088ef3ed2d69e3c1a', 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})
2025-07-04 18:41:13.461515 | orchestrator | skipping: [testbed-node-3] => (item={'id': '223d713ee474da7655d5498ce09c96b7c0ec8676b205e388ad1d1e76313484af', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 12 minutes'})
2025-07-04 18:41:13.461538 | orchestrator | skipping: [testbed-node-3] => (item={'id': '258269a66bdc1ad776d515b7111c776208216db4cfbf90d46b5457b9f72295a9', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 13 minutes'})
2025-07-04 18:41:13.461549 | orchestrator | skipping: [testbed-node-3] => (item={'id': '6cbfc16133b87ded9ad30b1493a772928789f5b84888453bb398038de8c03332', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'name':
'/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2025-07-04 18:41:13.461563 | orchestrator | skipping: [testbed-node-3] => (item={'id': '9f197909e2e2b6f5081decf4902bbab90b488bd34a2913aee04d61bde2ef6f93', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 14 minutes'})  2025-07-04 18:41:13.461574 | orchestrator | skipping: [testbed-node-3] => (item={'id': '52d9962a8324a89231690f3ca4b159cd36e64cbbf6cf9f8db73ecf0328c5d135', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up 22 minutes'})  2025-07-04 18:41:13.461593 | orchestrator | skipping: [testbed-node-3] => (item={'id': '093a43c20e2b374ff3a0e7e6e97171395cda057f9243731edd8d64dc98b09849', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up 23 minutes'})  2025-07-04 18:41:13.461605 | orchestrator | skipping: [testbed-node-3] => (item={'id': '41119e0320864d92061e622320a70523d1979c29aee09ed54bae44db24cf5628', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up 24 minutes'})  2025-07-04 18:41:13.461619 | orchestrator | ok: [testbed-node-3] => (item={'id': 'c0e054ee0ff4ec31567ed94ac8a1f02fd724249d865b3e568f6709f9ede11624', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up 25 minutes'}) 2025-07-04 18:41:13.461631 | orchestrator | ok: [testbed-node-3] => (item={'id': 'fa7c7f49db46e80a1b37a6e63ca842d3a342dd8faed148e6eef2d71c3a3ded6d', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up 25 minutes'}) 2025-07-04 18:41:13.461642 | orchestrator | skipping: [testbed-node-3] => (item={'id': 
'fee83777e2a5f13ac5a285e746f350ca6886f7d977133f364ed66d17e2adead9', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 29 minutes'})  2025-07-04 18:41:13.461653 | orchestrator | skipping: [testbed-node-3] => (item={'id': '31cfc410639ea14396a0970a82389c856886e7078f41ad25753a21580bd1d7ec', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 31 minutes (healthy)'})  2025-07-04 18:41:13.461669 | orchestrator | skipping: [testbed-node-4] => (item={'id': '8c223c0e858be386ab80a1874c4910dec041b97f7856e0bb02528b46900b97de', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})  2025-07-04 18:41:13.461681 | orchestrator | skipping: [testbed-node-3] => (item={'id': '388e8ab4295159e59ae8b6ed59dad7cfeccd293164ac2d1f24b363cc7a949813', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 31 minutes (healthy)'})  2025-07-04 18:41:13.461692 | orchestrator | skipping: [testbed-node-4] => (item={'id': '4c4afcb494fb84e0fde7381150a724c4fca8af4bf185508eaa546c00ed32211e', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})  2025-07-04 18:41:13.461703 | orchestrator | skipping: [testbed-node-3] => (item={'id': '811150b92bcb3c1ac5f1b7c475b8940ecb7e7453b86d1bf4c7799525768a125e', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'name': '/cron', 'state': 'running', 'status': 'Up 32 minutes'})  2025-07-04 18:41:13.461720 | orchestrator | skipping: [testbed-node-4] => (item={'id': '7b6c258fd6444e1e83ab5fdb6164e95417685a1ad37efc8fd1bc38f928ea47f1', 'image': 
'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})  2025-07-04 18:41:13.747875 | orchestrator | skipping: [testbed-node-3] => (item={'id': '9caf0272db073daa99e2edc70973cdaa230be9b7eccf45314ca4b01ed5b69b50', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 32 minutes'})  2025-07-04 18:41:13.748061 | orchestrator | skipping: [testbed-node-4] => (item={'id': '9664d24499e123b9ca117de120b8c6ea564f2b7c802fe79c657f6f06edb2682c', 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})  2025-07-04 18:41:13.748089 | orchestrator | skipping: [testbed-node-3] => (item={'id': '016c9b1314c52711a69f25ea470be38d251470526f8258f3e17ea07d65f648a4', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'name': '/fluentd', 'state': 'running', 'status': 'Up 33 minutes'})  2025-07-04 18:41:13.748138 | orchestrator | skipping: [testbed-node-4] => (item={'id': '3bd01ce4fb11d1524b44838ffe3aa94c176552583dc6caf6a7a68c259138a30a', 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})  2025-07-04 18:41:13.748160 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'f54b8b4a324cba097ce30b01288f1621dac6ddc0850a8c5eea73e665b96f2cab', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 12 minutes'})  2025-07-04 18:41:13.748182 | orchestrator | skipping: [testbed-node-4] => (item={'id': '4ed1353b19263e6e3a3d33afb4d01aecf9d0bea64ac2feefd321f468d680fc92', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'name': '/prometheus_cadvisor', 'state': 
'running', 'status': 'Up 13 minutes'})  2025-07-04 18:41:13.748202 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'c6533828e7f321a4d45607c2c461f879cfc37e7e47ea684ada9d66f461c8d692', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2025-07-04 18:41:13.748223 | orchestrator | skipping: [testbed-node-4] => (item={'id': '4f1341e907039f87429c7b25d44c26ea4449efc8a65a091ad2c08c9d64855b83', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 14 minutes'})  2025-07-04 18:41:13.748243 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'b52a179c62fc6d8ef463a68d3b427c7c801c2d180f282c516890cfd6eb4f09c9', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up 22 minutes'})  2025-07-04 18:41:13.748262 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'f73c410317c5552c9da7726a0ea3a862daba39530b28df874b9d200c6c21eb73', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up 23 minutes'})  2025-07-04 18:41:13.748301 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'f4be29c5f0ca8ec5e3e0631fbbbf409e9d6bae33887b6078866573fc0a983b66', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up 24 minutes'})  2025-07-04 18:41:13.748326 | orchestrator | ok: [testbed-node-4] => (item={'id': '438f5bf913e695c0cf1b2ca736e6822248b8f7d2e6134f8510b7ba2d959f207e', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up 25 minutes'}) 2025-07-04 18:41:13.748346 | orchestrator | ok: [testbed-node-4] => (item={'id': 
'644933e015f83f1da4761b787a5168bffb39b0d121f399d39f8628dbbe459ca7', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up 25 minutes'}) 2025-07-04 18:41:13.748366 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'd175f901b05870ed8cbd90b182861425ca2225427833e176f8da9ff04a5235a1', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 29 minutes'})  2025-07-04 18:41:13.748414 | orchestrator | skipping: [testbed-node-4] => (item={'id': '72b7f7a44ce2ff0c16998089a5b309da0a328037aef0970cd28b1b8ce9ce7cb9', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 31 minutes (healthy)'})  2025-07-04 18:41:13.748436 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'c06d4cb5e81d9aa3b6580980ac6c6bdfafcf2bad709127d064fb3a27435f0a10', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 31 minutes (healthy)'})  2025-07-04 18:41:13.748469 | orchestrator | skipping: [testbed-node-4] => (item={'id': '33c41dd0a3dcadc574b9e0660912e553f405b45cd7279ec2d2a891dfe8998830', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'name': '/cron', 'state': 'running', 'status': 'Up 32 minutes'})  2025-07-04 18:41:13.748491 | orchestrator | skipping: [testbed-node-4] => (item={'id': '0e1eb3a5ac5c4e72dda8c0d180a18da271e8c6fc39d3f7c6bbb1d3e13c43333d', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 32 minutes'})  2025-07-04 18:41:13.748513 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'b6d83a487d7978aa65136d1de2d09a7c12f40faeaabc2b0263af31c02ea5420d', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'name': '/fluentd', 
'state': 'running', 'status': 'Up 33 minutes'})  2025-07-04 18:41:13.748534 | orchestrator | skipping: [testbed-node-5] => (item={'id': '9b797acb5e97ce11cc9bcb5b8ef6674915a59bbf046b725c69a2e5c09e1bd084', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})  2025-07-04 18:41:13.748554 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'b6205650da94d2407f1cf0f33b6fc9320a1df44f841759d0ff5cecdcb7728e8b', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})  2025-07-04 18:41:13.748574 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'e04605e950753978d73a18132cf37fafb1d58a73d36dad06c0459d69351bed0d', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})  2025-07-04 18:41:13.748595 | orchestrator | skipping: [testbed-node-5] => (item={'id': '8f9822aaa85a8d67886c62cf6805f08d43e8e88dabffc3868349dcd0e206fac2', 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})  2025-07-04 18:41:13.748614 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'd063701a2ed9139626d67a46518a8a4bc1877ca9b7060de62bc933ee7a1e6a23', 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})  2025-07-04 18:41:13.748632 | orchestrator | skipping: [testbed-node-5] => (item={'id': '0463ef8bd701c028fa6f0aba03657caf322c556205b653055c05458d0f26680e', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 12 minutes'})  2025-07-04 18:41:13.748652 | orchestrator | 
skipping: [testbed-node-5] => (item={'id': '562061d532f2d22bcc575ac99212a1aaa9e83f9fcaf121f96e9636cd912d627c', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 13 minutes'})  2025-07-04 18:41:13.748672 | orchestrator | skipping: [testbed-node-5] => (item={'id': '6e60dedabe702be933712a35cd13c43c43e0a2e5c929f1a8ef2844bf63928f90', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2025-07-04 18:41:13.748692 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'c4d51fca19012ed87a66059c9c8d784d776913d5c23868552a098ac39109756c', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 14 minutes'})  2025-07-04 18:41:13.748711 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'c5d21f80cac624e9fac879b0e63c6132f345880f48ff5bde09cfacaa4a5a5325', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up 22 minutes'})  2025-07-04 18:41:13.748768 | orchestrator | skipping: [testbed-node-5] => (item={'id': '73d6339408e608ab8abee69c55e45fcf033cb7a2957cc4990bad223d787193e2', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up 23 minutes'})  2025-07-04 18:41:22.223860 | orchestrator | skipping: [testbed-node-5] => (item={'id': '1a676c62fd853c898730fc89e49d7e289826a7b3146e2c0210c966da84895498', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up 24 minutes'})  2025-07-04 18:41:22.224042 | orchestrator | ok: [testbed-node-5] => (item={'id': '0cfc8fd86986f421c4b419716735f659400b1ad2b0d30e7c9e638801a8fb256d', 
'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up 25 minutes'}) 2025-07-04 18:41:22.224063 | orchestrator | ok: [testbed-node-5] => (item={'id': 'c4b63e0aeebdefe4d321dcc66b24f22f101aef5d67bd2b7dd0af70bc005ac11c', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up 25 minutes'}) 2025-07-04 18:41:22.224075 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'bc06042f60051d46a664498feea02ee44dfc822de5e7532c0f9a6842d840829a', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 29 minutes'})  2025-07-04 18:41:22.224089 | orchestrator | skipping: [testbed-node-5] => (item={'id': '437539ad945d2b6fb09773106149c354ac33b67013d42453b89413de22e297ab', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 31 minutes (healthy)'})  2025-07-04 18:41:22.224102 | orchestrator | skipping: [testbed-node-5] => (item={'id': '8ac240ddc09865ae5b6cc201a00d83989225e3288324847b725da68176de6ef3', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 31 minutes (healthy)'})  2025-07-04 18:41:22.224113 | orchestrator | skipping: [testbed-node-5] => (item={'id': '4c1fdd573721d3f1bb8475236291a01b6bfee68edc265330c6884d8c8cee2a4b', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'name': '/cron', 'state': 'running', 'status': 'Up 32 minutes'})  2025-07-04 18:41:22.224125 | orchestrator | skipping: [testbed-node-5] => (item={'id': '6d31a173e033dc745ce9aebe3ba7223b4be7f0d99488c74181fb0ae750de5280', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 32 minutes'})  2025-07-04 18:41:22.224137 | orchestrator 
| skipping: [testbed-node-5] => (item={'id': 'e8c0be8fc3172f1c3c91f73f4230dd8f4720e4aa3918aed5b08ef4b0802a12a7', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'name': '/fluentd', 'state': 'running', 'status': 'Up 33 minutes'})  2025-07-04 18:41:22.224149 | orchestrator | 2025-07-04 18:41:22.224163 | orchestrator | TASK [Get count of ceph-osd containers on host] ******************************** 2025-07-04 18:41:22.224175 | orchestrator | Friday 04 July 2025 18:41:13 +0000 (0:00:00.501) 0:00:05.113 *********** 2025-07-04 18:41:22.224186 | orchestrator | ok: [testbed-node-3] 2025-07-04 18:41:22.224198 | orchestrator | ok: [testbed-node-4] 2025-07-04 18:41:22.224209 | orchestrator | ok: [testbed-node-5] 2025-07-04 18:41:22.224220 | orchestrator | 2025-07-04 18:41:22.224231 | orchestrator | TASK [Set test result to failed when count of containers is wrong] ************* 2025-07-04 18:41:22.224259 | orchestrator | Friday 04 July 2025 18:41:14 +0000 (0:00:00.339) 0:00:05.453 *********** 2025-07-04 18:41:22.224271 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:41:22.224283 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:41:22.224294 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:41:22.224305 | orchestrator | 2025-07-04 18:41:22.224316 | orchestrator | TASK [Set test result to passed if count matches] ****************************** 2025-07-04 18:41:22.224327 | orchestrator | Friday 04 July 2025 18:41:14 +0000 (0:00:00.580) 0:00:06.033 *********** 2025-07-04 18:41:22.224360 | orchestrator | ok: [testbed-node-3] 2025-07-04 18:41:22.224372 | orchestrator | ok: [testbed-node-4] 2025-07-04 18:41:22.224384 | orchestrator | ok: [testbed-node-5] 2025-07-04 18:41:22.224397 | orchestrator | 2025-07-04 18:41:22.224409 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-07-04 18:41:22.224422 | orchestrator | Friday 04 July 2025 18:41:14 +0000 (0:00:00.317) 0:00:06.351 *********** 
2025-07-04 18:41:22.224434 | orchestrator | ok: [testbed-node-3] 2025-07-04 18:41:22.224446 | orchestrator | ok: [testbed-node-4] 2025-07-04 18:41:22.224458 | orchestrator | ok: [testbed-node-5] 2025-07-04 18:41:22.224470 | orchestrator | 2025-07-04 18:41:22.224482 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ******************** 2025-07-04 18:41:22.224495 | orchestrator | Friday 04 July 2025 18:41:15 +0000 (0:00:00.393) 0:00:06.745 *********** 2025-07-04 18:41:22.224507 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})  2025-07-04 18:41:22.224521 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})  2025-07-04 18:41:22.224533 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:41:22.224546 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})  2025-07-04 18:41:22.224562 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})  2025-07-04 18:41:22.224606 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:41:22.224624 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})  2025-07-04 18:41:22.224637 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})  2025-07-04 18:41:22.224650 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:41:22.224662 | orchestrator | 2025-07-04 18:41:22.224675 | orchestrator | TASK [Get count of ceph-osd containers that are not running] ******************* 2025-07-04 18:41:22.224688 | orchestrator | Friday 04 July 2025 18:41:15 +0000 (0:00:00.363) 0:00:07.108 *********** 2025-07-04 18:41:22.224701 | orchestrator | ok: [testbed-node-3] 2025-07-04 18:41:22.224714 | orchestrator | ok: [testbed-node-4] 2025-07-04 18:41:22.224726 | orchestrator | ok: 
[testbed-node-5] 2025-07-04 18:41:22.224739 | orchestrator | 2025-07-04 18:41:22.224749 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2025-07-04 18:41:22.224760 | orchestrator | Friday 04 July 2025 18:41:16 +0000 (0:00:00.495) 0:00:07.604 *********** 2025-07-04 18:41:22.224771 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:41:22.224782 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:41:22.224792 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:41:22.224803 | orchestrator | 2025-07-04 18:41:22.224814 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2025-07-04 18:41:22.224824 | orchestrator | Friday 04 July 2025 18:41:16 +0000 (0:00:00.290) 0:00:07.894 *********** 2025-07-04 18:41:22.224835 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:41:22.224846 | orchestrator | skipping: [testbed-node-4] 2025-07-04 18:41:22.224863 | orchestrator | skipping: [testbed-node-5] 2025-07-04 18:41:22.224882 | orchestrator | 2025-07-04 18:41:22.224898 | orchestrator | TASK [Set test result to passed if all containers are running] ***************** 2025-07-04 18:41:22.224909 | orchestrator | Friday 04 July 2025 18:41:16 +0000 (0:00:00.286) 0:00:08.181 *********** 2025-07-04 18:41:22.224920 | orchestrator | ok: [testbed-node-3] 2025-07-04 18:41:22.224930 | orchestrator | ok: [testbed-node-4] 2025-07-04 18:41:22.225008 | orchestrator | ok: [testbed-node-5] 2025-07-04 18:41:22.225020 | orchestrator | 2025-07-04 18:41:22.225031 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-07-04 18:41:22.225042 | orchestrator | Friday 04 July 2025 18:41:17 +0000 (0:00:00.311) 0:00:08.492 *********** 2025-07-04 18:41:22.225053 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:41:22.225064 | orchestrator | 2025-07-04 18:41:22.225074 | orchestrator | TASK [Aggregate test results step two] 
***************************************** 2025-07-04 18:41:22.225096 | orchestrator | Friday 04 July 2025 18:41:17 +0000 (0:00:00.682) 0:00:09.174 *********** 2025-07-04 18:41:22.225106 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:41:22.225117 | orchestrator | 2025-07-04 18:41:22.225128 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-07-04 18:41:22.225139 | orchestrator | Friday 04 July 2025 18:41:18 +0000 (0:00:00.251) 0:00:09.426 *********** 2025-07-04 18:41:22.225149 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:41:22.225160 | orchestrator | 2025-07-04 18:41:22.225171 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-07-04 18:41:22.225182 | orchestrator | Friday 04 July 2025 18:41:18 +0000 (0:00:00.238) 0:00:09.664 *********** 2025-07-04 18:41:22.225192 | orchestrator | 2025-07-04 18:41:22.225203 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-07-04 18:41:22.225214 | orchestrator | Friday 04 July 2025 18:41:18 +0000 (0:00:00.079) 0:00:09.743 *********** 2025-07-04 18:41:22.225225 | orchestrator | 2025-07-04 18:41:22.225236 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-07-04 18:41:22.225246 | orchestrator | Friday 04 July 2025 18:41:18 +0000 (0:00:00.074) 0:00:09.818 *********** 2025-07-04 18:41:22.225257 | orchestrator | 2025-07-04 18:41:22.225268 | orchestrator | TASK [Print report file information] ******************************************* 2025-07-04 18:41:22.225278 | orchestrator | Friday 04 July 2025 18:41:18 +0000 (0:00:00.071) 0:00:09.889 *********** 2025-07-04 18:41:22.225289 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:41:22.225300 | orchestrator | 2025-07-04 18:41:22.225311 | orchestrator | TASK [Fail early due to containers not running] ******************************** 2025-07-04 18:41:22.225322 | 
orchestrator | Friday 04 July 2025 18:41:18 +0000 (0:00:00.257) 0:00:10.147 *********** 2025-07-04 18:41:22.225339 | orchestrator | skipping: [testbed-node-3] 2025-07-04 18:41:22.225350 | orchestrator | 2025-07-04 18:41:22.225361 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-07-04 18:41:22.225372 | orchestrator | Friday 04 July 2025 18:41:19 +0000 (0:00:00.264) 0:00:10.411 *********** 2025-07-04 18:41:22.225389 | orchestrator | ok: [testbed-node-3] 2025-07-04 18:41:22.225408 | orchestrator | ok: [testbed-node-4] 2025-07-04 18:41:22.225427 | orchestrator | ok: [testbed-node-5] 2025-07-04 18:41:22.225438 | orchestrator | 2025-07-04 18:41:22.225449 | orchestrator | TASK [Set _mon_hostname fact] ************************************************** 2025-07-04 18:41:22.225460 | orchestrator | Friday 04 July 2025 18:41:19 +0000 (0:00:00.282) 0:00:10.693 *********** 2025-07-04 18:41:22.225471 | orchestrator | ok: [testbed-node-3] 2025-07-04 18:41:22.225481 | orchestrator | 2025-07-04 18:41:22.225492 | orchestrator | TASK [Get ceph osd tree] ******************************************************* 2025-07-04 18:41:22.225503 | orchestrator | Friday 04 July 2025 18:41:19 +0000 (0:00:00.651) 0:00:11.345 *********** 2025-07-04 18:41:22.225513 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-07-04 18:41:22.225524 | orchestrator | 2025-07-04 18:41:22.225534 | orchestrator | TASK [Parse osd tree from JSON] ************************************************ 2025-07-04 18:41:22.225545 | orchestrator | Friday 04 July 2025 18:41:21 +0000 (0:00:01.661) 0:00:13.007 *********** 2025-07-04 18:41:22.225556 | orchestrator | ok: [testbed-node-3] 2025-07-04 18:41:22.225566 | orchestrator | 2025-07-04 18:41:22.225576 | orchestrator | TASK [Get OSDs that are not up or in] ****************************************** 2025-07-04 18:41:22.225587 | orchestrator | Friday 04 July 2025 18:41:21 +0000 (0:00:00.145) 
0:00:13.152 ***********
2025-07-04 18:41:22.225598 | orchestrator | ok: [testbed-node-3]
2025-07-04 18:41:22.225608 | orchestrator |
2025-07-04 18:41:22.225619 | orchestrator | TASK [Fail test if OSDs are not up or in] **************************************
2025-07-04 18:41:22.225630 | orchestrator | Friday 04 July 2025 18:41:22 +0000 (0:00:00.313) 0:00:13.466 ***********
2025-07-04 18:41:22.225648 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:41:34.941291 | orchestrator |
2025-07-04 18:41:34.941383 | orchestrator | TASK [Pass test if OSDs are all up and in] *************************************
2025-07-04 18:41:34.941409 | orchestrator | Friday 04 July 2025 18:41:22 +0000 (0:00:00.140) 0:00:13.606 ***********
2025-07-04 18:41:34.941416 | orchestrator | ok: [testbed-node-3]
2025-07-04 18:41:34.941423 | orchestrator |
2025-07-04 18:41:34.941430 | orchestrator | TASK [Prepare test data] *******************************************************
2025-07-04 18:41:34.941436 | orchestrator | Friday 04 July 2025 18:41:22 +0000 (0:00:00.139) 0:00:13.746 ***********
2025-07-04 18:41:34.941441 | orchestrator | ok: [testbed-node-3]
2025-07-04 18:41:34.941447 | orchestrator | ok: [testbed-node-4]
2025-07-04 18:41:34.941453 | orchestrator | ok: [testbed-node-5]
2025-07-04 18:41:34.941458 | orchestrator |
2025-07-04 18:41:34.941464 | orchestrator | TASK [List ceph LVM volumes and collect data] **********************************
2025-07-04 18:41:34.941470 | orchestrator | Friday 04 July 2025 18:41:22 +0000 (0:00:00.287) 0:00:14.034 ***********
2025-07-04 18:41:34.941476 | orchestrator | changed: [testbed-node-3]
2025-07-04 18:41:34.941482 | orchestrator | changed: [testbed-node-5]
2025-07-04 18:41:34.941491 | orchestrator | changed: [testbed-node-4]
2025-07-04 18:41:34.941500 | orchestrator |
2025-07-04 18:41:34.941510 | orchestrator | TASK [Parse LVM data as JSON] **************************************************
2025-07-04 18:41:34.941520 | orchestrator | Friday 04 July 2025 18:41:25 +0000 (0:00:02.638) 0:00:16.673 ***********
2025-07-04 18:41:34.941530 | orchestrator | ok: [testbed-node-3]
2025-07-04 18:41:34.941540 | orchestrator | ok: [testbed-node-4]
2025-07-04 18:41:34.941549 | orchestrator | ok: [testbed-node-5]
2025-07-04 18:41:34.941558 | orchestrator |
2025-07-04 18:41:34.941567 | orchestrator | TASK [Get unencrypted and encrypted OSDs] **************************************
2025-07-04 18:41:34.941577 | orchestrator | Friday 04 July 2025 18:41:25 +0000 (0:00:00.338) 0:00:17.011 ***********
2025-07-04 18:41:34.941587 | orchestrator | ok: [testbed-node-3]
2025-07-04 18:41:34.941596 | orchestrator | ok: [testbed-node-4]
2025-07-04 18:41:34.941605 | orchestrator | ok: [testbed-node-5]
2025-07-04 18:41:34.941615 | orchestrator |
2025-07-04 18:41:34.941624 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] **************************
2025-07-04 18:41:34.941634 | orchestrator | Friday 04 July 2025 18:41:26 +0000 (0:00:00.508) 0:00:17.520 ***********
2025-07-04 18:41:34.941644 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:41:34.941653 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:41:34.941658 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:41:34.941664 | orchestrator |
2025-07-04 18:41:34.941670 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ********************
2025-07-04 18:41:34.941676 | orchestrator | Friday 04 July 2025 18:41:26 +0000 (0:00:00.332) 0:00:17.852 ***********
2025-07-04 18:41:34.941682 | orchestrator | ok: [testbed-node-3]
2025-07-04 18:41:34.941688 | orchestrator | ok: [testbed-node-4]
2025-07-04 18:41:34.941694 | orchestrator | ok: [testbed-node-5]
2025-07-04 18:41:34.941699 | orchestrator |
2025-07-04 18:41:34.941705 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************
2025-07-04 18:41:34.941711 | orchestrator | Friday 04 July 2025 18:41:26 +0000 (0:00:00.503) 0:00:18.356 ***********
2025-07-04 18:41:34.941717 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:41:34.941723 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:41:34.941728 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:41:34.941734 | orchestrator |
2025-07-04 18:41:34.941740 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ******************
2025-07-04 18:41:34.941746 | orchestrator | Friday 04 July 2025 18:41:27 +0000 (0:00:00.294) 0:00:18.650 ***********
2025-07-04 18:41:34.941752 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:41:34.941759 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:41:34.941765 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:41:34.941771 | orchestrator |
2025-07-04 18:41:34.941777 | orchestrator | TASK [Prepare test data] *******************************************************
2025-07-04 18:41:34.941784 | orchestrator | Friday 04 July 2025 18:41:27 +0000 (0:00:00.284) 0:00:18.934 ***********
2025-07-04 18:41:34.941790 | orchestrator | ok: [testbed-node-3]
2025-07-04 18:41:34.941796 | orchestrator | ok: [testbed-node-4]
2025-07-04 18:41:34.941809 | orchestrator | ok: [testbed-node-5]
2025-07-04 18:41:34.941816 | orchestrator |
2025-07-04 18:41:34.941822 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] ***************
2025-07-04 18:41:34.941828 | orchestrator | Friday 04 July 2025 18:41:28 +0000 (0:00:00.488) 0:00:19.422 ***********
2025-07-04 18:41:34.941835 | orchestrator | ok: [testbed-node-3]
2025-07-04 18:41:34.941841 | orchestrator | ok: [testbed-node-4]
2025-07-04 18:41:34.941847 | orchestrator | ok: [testbed-node-5]
2025-07-04 18:41:34.941854 | orchestrator |
2025-07-04 18:41:34.941862 | orchestrator | TASK [Calculate sub test expression results] ***********************************
2025-07-04 18:41:34.941869 | orchestrator | Friday 04 July 2025 18:41:28 +0000 (0:00:00.716) 0:00:20.139 ***********
2025-07-04 18:41:34.941877 | orchestrator | ok: [testbed-node-3]
2025-07-04 18:41:34.941884 | orchestrator | ok: [testbed-node-4]
2025-07-04 18:41:34.941891 | orchestrator | ok: [testbed-node-5]
2025-07-04 18:41:34.941898 | orchestrator |
2025-07-04 18:41:34.941905 | orchestrator | TASK [Fail test if any sub test failed] ****************************************
2025-07-04 18:41:34.941912 | orchestrator | Friday 04 July 2025 18:41:29 +0000 (0:00:00.309) 0:00:20.448 ***********
2025-07-04 18:41:34.941919 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:41:34.941926 | orchestrator | skipping: [testbed-node-4]
2025-07-04 18:41:34.941933 | orchestrator | skipping: [testbed-node-5]
2025-07-04 18:41:34.941940 | orchestrator |
2025-07-04 18:41:34.941947 | orchestrator | TASK [Pass test if no sub test failed] *****************************************
2025-07-04 18:41:34.941955 | orchestrator | Friday 04 July 2025 18:41:29 +0000 (0:00:00.291) 0:00:20.740 ***********
2025-07-04 18:41:34.941980 | orchestrator | ok: [testbed-node-3]
2025-07-04 18:41:34.941988 | orchestrator | ok: [testbed-node-4]
2025-07-04 18:41:34.941996 | orchestrator | ok: [testbed-node-5]
2025-07-04 18:41:34.942003 | orchestrator |
2025-07-04 18:41:34.942010 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2025-07-04 18:41:34.942066 | orchestrator | Friday 04 July 2025 18:41:29 +0000 (0:00:00.302) 0:00:21.043 ***********
2025-07-04 18:41:34.942074 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-07-04 18:41:34.942082 | orchestrator |
2025-07-04 18:41:34.942089 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2025-07-04 18:41:34.942096 | orchestrator | Friday 04 July 2025 18:41:30 +0000 (0:00:00.685) 0:00:21.728 ***********
2025-07-04 18:41:34.942103 | orchestrator | skipping: [testbed-node-3]
2025-07-04 18:41:34.942110 | orchestrator |
2025-07-04 18:41:34.942132 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-07-04 18:41:34.942140 | orchestrator | Friday 04 July 2025 18:41:30 +0000 (0:00:00.268) 0:00:21.996 ***********
2025-07-04 18:41:34.942147 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-07-04 18:41:34.942154 | orchestrator |
2025-07-04 18:41:34.942162 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-07-04 18:41:34.942169 | orchestrator | Friday 04 July 2025 18:41:32 +0000 (0:00:01.648) 0:00:23.645 ***********
2025-07-04 18:41:34.942176 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-07-04 18:41:34.942182 | orchestrator |
2025-07-04 18:41:34.942188 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-07-04 18:41:34.942194 | orchestrator | Friday 04 July 2025 18:41:32 +0000 (0:00:00.255) 0:00:23.901 ***********
2025-07-04 18:41:34.942200 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-07-04 18:41:34.942207 | orchestrator |
2025-07-04 18:41:34.942213 | orchestrator | TASK [Flush handlers] **********************************************************
2025-07-04 18:41:34.942219 | orchestrator | Friday 04 July 2025 18:41:32 +0000 (0:00:00.262) 0:00:24.163 ***********
2025-07-04 18:41:34.942225 | orchestrator |
2025-07-04 18:41:34.942232 | orchestrator | TASK [Flush handlers] **********************************************************
2025-07-04 18:41:34.942238 | orchestrator | Friday 04 July 2025 18:41:32 +0000 (0:00:00.082) 0:00:24.245 ***********
2025-07-04 18:41:34.942244 | orchestrator |
2025-07-04 18:41:34.942251 | orchestrator | TASK [Flush handlers] **********************************************************
2025-07-04 18:41:34.942263 | orchestrator | Friday 04 July 2025 18:41:32 +0000 (0:00:00.070) 0:00:24.316 ***********
2025-07-04 18:41:34.942269 | orchestrator |
2025-07-04 18:41:34.942275 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2025-07-04 18:41:34.942282 | orchestrator | Friday 04 July 2025 18:41:33 +0000 (0:00:00.074) 0:00:24.391 ***********
2025-07-04 18:41:34.942288 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-07-04 18:41:34.942294 | orchestrator |
2025-07-04 18:41:34.942300 | orchestrator | TASK [Print report file information] *******************************************
2025-07-04 18:41:34.942306 | orchestrator | Friday 04 July 2025 18:41:34 +0000 (0:00:01.290) 0:00:25.682 ***********
2025-07-04 18:41:34.942312 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => {
2025-07-04 18:41:34.942319 | orchestrator |     "msg": [
2025-07-04 18:41:34.942326 | orchestrator |         "Validator run completed.",
2025-07-04 18:41:34.942332 | orchestrator |         "You can find the report file here:",
2025-07-04 18:41:34.942338 | orchestrator |         "/opt/reports/validator/ceph-osds-validator-2025-07-04T18:41:09+00:00-report.json",
2025-07-04 18:41:34.942346 | orchestrator |         "on the following host:",
2025-07-04 18:41:34.942352 | orchestrator |         "testbed-manager"
2025-07-04 18:41:34.942359 | orchestrator |     ]
2025-07-04 18:41:34.942365 | orchestrator | }
2025-07-04 18:41:34.942371 | orchestrator |
2025-07-04 18:41:34.942378 | orchestrator | PLAY RECAP *********************************************************************
2025-07-04 18:41:34.942385 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2025-07-04 18:41:34.942428 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-07-04 18:41:34.942435 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-07-04 18:41:34.942442 | orchestrator |
2025-07-04 18:41:34.942448 | orchestrator |
2025-07-04 18:41:34.942454 | orchestrator | TASKS RECAP ********************************************************************
2025-07-04 18:41:34.942460 | orchestrator | Friday 04 July 2025 18:41:34 +0000 (0:00:00.617) 0:00:26.299 ***********
2025-07-04 18:41:34.942467 | orchestrator | ===============================================================================
2025-07-04 18:41:34.942476 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 2.64s
2025-07-04 18:41:34.942482 | orchestrator | Get ceph osd tree ------------------------------------------------------- 1.66s
2025-07-04 18:41:34.942489 | orchestrator | Aggregate test results step one ----------------------------------------- 1.65s
2025-07-04 18:41:34.942495 | orchestrator | Write report file ------------------------------------------------------- 1.29s
2025-07-04 18:41:34.942501 | orchestrator | Create report output directory ------------------------------------------ 0.94s
2025-07-04 18:41:34.942507 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.72s
2025-07-04 18:41:34.942514 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.69s
2025-07-04 18:41:34.942520 | orchestrator | Aggregate test results step one ----------------------------------------- 0.68s
2025-07-04 18:41:34.942526 | orchestrator | Get timestamp for report file ------------------------------------------- 0.66s
2025-07-04 18:41:34.942532 | orchestrator | Set _mon_hostname fact -------------------------------------------------- 0.65s
2025-07-04 18:41:34.942538 | orchestrator | Print report file information ------------------------------------------- 0.62s
2025-07-04 18:41:34.942544 | orchestrator | Set test result to failed when count of containers is wrong ------------- 0.58s
2025-07-04 18:41:34.942550 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.58s
2025-07-04 18:41:34.942557 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.51s
2025-07-04 18:41:34.942563 | orchestrator | Prepare test data ------------------------------------------------------- 0.50s
2025-07-04 18:41:34.942574 | orchestrator | Pass if count of encrypted OSDs equals count of OSDs -------------------- 0.50s
2025-07-04 18:41:34.942585 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.50s
2025-07-04 18:41:35.257388 | orchestrator | Get count of ceph-osd containers that are not running ------------------- 0.50s
2025-07-04 18:41:35.257468 | orchestrator | Prepare test data ------------------------------------------------------- 0.49s
2025-07-04 18:41:35.257475 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.40s
2025-07-04 18:41:35.568163 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh
2025-07-04 18:41:35.574008 | orchestrator | + set -e
2025-07-04 18:41:35.574125 | orchestrator | + source /opt/manager-vars.sh
2025-07-04 18:41:35.574141 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-07-04 18:41:35.574152 | orchestrator | ++ NUMBER_OF_NODES=6
2025-07-04 18:41:35.574163 | orchestrator | ++ export CEPH_VERSION=reef
2025-07-04 18:41:35.574170 | orchestrator | ++ CEPH_VERSION=reef
2025-07-04 18:41:35.574176 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-07-04 18:41:35.574184 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-07-04 18:41:35.574190 | orchestrator | ++ export MANAGER_VERSION=9.1.0
2025-07-04 18:41:35.574197 | orchestrator | ++ MANAGER_VERSION=9.1.0
2025-07-04 18:41:35.574203 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-07-04 18:41:35.574209 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-07-04 18:41:35.574216 | orchestrator | ++ export ARA=false
2025-07-04 18:41:35.574222 | orchestrator | ++ ARA=false
2025-07-04 18:41:35.574229 | orchestrator | ++ export DEPLOY_MODE=manager
2025-07-04 18:41:35.574235 | orchestrator | ++ DEPLOY_MODE=manager
2025-07-04 18:41:35.574241 | orchestrator | ++ export TEMPEST=false
2025-07-04 18:41:35.574247 | orchestrator | ++ TEMPEST=false
2025-07-04 18:41:35.574253 | orchestrator | ++ export IS_ZUUL=true
2025-07-04 18:41:35.574260 | orchestrator | ++ IS_ZUUL=true
2025-07-04 18:41:35.574266 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.186
2025-07-04 18:41:35.574272 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.186
2025-07-04 18:41:35.574279 | orchestrator | ++ export EXTERNAL_API=false
2025-07-04 18:41:35.574285 | orchestrator | ++ EXTERNAL_API=false
2025-07-04 18:41:35.574291 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-07-04 18:41:35.574297 | orchestrator | ++ IMAGE_USER=ubuntu
2025-07-04 18:41:35.574303 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-07-04 18:41:35.574309 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-07-04 18:41:35.574315 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-07-04 18:41:35.574321 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-07-04 18:41:35.574328 | orchestrator | + [[ -e /etc/redhat-release ]]
2025-07-04 18:41:35.574334 | orchestrator | + source /etc/os-release
2025-07-04 18:41:35.574340 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.2 LTS'
2025-07-04 18:41:35.574347 | orchestrator | ++ NAME=Ubuntu
2025-07-04 18:41:35.574353 | orchestrator | ++ VERSION_ID=24.04
2025-07-04 18:41:35.574359 | orchestrator | ++ VERSION='24.04.2 LTS (Noble Numbat)'
2025-07-04 18:41:35.574365 | orchestrator | ++ VERSION_CODENAME=noble
2025-07-04 18:41:35.574371 | orchestrator | ++ ID=ubuntu
2025-07-04 18:41:35.574377 | orchestrator | ++ ID_LIKE=debian
2025-07-04 18:41:35.574384 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/
2025-07-04 18:41:35.574390 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/
2025-07-04 18:41:35.574396 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/
2025-07-04 18:41:35.574402 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy
2025-07-04 18:41:35.574409 | orchestrator | ++ UBUNTU_CODENAME=noble
2025-07-04 18:41:35.574416 | orchestrator | ++ LOGO=ubuntu-logo
2025-07-04 18:41:35.574422 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]]
2025-07-04 18:41:35.574429 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client'
2025-07-04 18:41:35.574437 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client
2025-07-04 18:41:35.602364 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client
2025-07-04 18:41:59.382681 | orchestrator |
2025-07-04 18:41:59.382856 | orchestrator | # Status of Elasticsearch
2025-07-04 18:41:59.382897 | orchestrator |
2025-07-04 18:41:59.382916 | orchestrator | + pushd /opt/configuration/contrib
2025-07-04 18:41:59.382934 | orchestrator | + echo
2025-07-04 18:41:59.382951 | orchestrator | + echo '# Status of Elasticsearch'
2025-07-04 18:41:59.382969 | orchestrator | + echo
2025-07-04 18:41:59.382985 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s
2025-07-04 18:41:59.571196 | orchestrator | OK - elasticsearch (kolla_logging) is running. status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0
2025-07-04 18:41:59.571279 | orchestrator |
2025-07-04 18:41:59.571290 | orchestrator | # Status of MariaDB
2025-07-04 18:41:59.571298 | orchestrator |
2025-07-04 18:41:59.571305 | orchestrator | + echo
2025-07-04 18:41:59.571312 | orchestrator | + echo '# Status of MariaDB'
2025-07-04 18:41:59.571319 | orchestrator | + echo
2025-07-04 18:41:59.571326 | orchestrator | + MARIADB_USER=root_shard_0
2025-07-04 18:41:59.571333 | orchestrator | + bash nagios-plugins/check_galera_cluster -u root_shard_0 -p password -H api-int.testbed.osism.xyz -c 1
2025-07-04 18:41:59.638985 | orchestrator | Reading package lists...
2025-07-04 18:41:59.983148 | orchestrator | Building dependency tree...
2025-07-04 18:41:59.983797 | orchestrator | Reading state information...
2025-07-04 18:42:00.398703 | orchestrator | bc is already the newest version (1.07.1-3ubuntu4).
2025-07-04 18:42:00.398824 | orchestrator | bc set to manually installed.
2025-07-04 18:42:00.398841 | orchestrator | 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
2025-07-04 18:42:01.071100 | orchestrator | OK: number of NODES = 3 (wsrep_cluster_size)
2025-07-04 18:42:01.072217 | orchestrator |
2025-07-04 18:42:01.072297 | orchestrator | # Status of Prometheus
2025-07-04 18:42:01.072320 | orchestrator |
2025-07-04 18:42:01.072341 | orchestrator | + echo
2025-07-04 18:42:01.072361 | orchestrator | + echo '# Status of Prometheus'
2025-07-04 18:42:01.072380 | orchestrator | + echo
2025-07-04 18:42:01.072399 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy
2025-07-04 18:42:01.137757 | orchestrator | Unauthorized
2025-07-04 18:42:01.141066 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready
2025-07-04 18:42:01.211109 | orchestrator | Unauthorized
2025-07-04 18:42:01.213884 | orchestrator |
2025-07-04 18:42:01.213920 | orchestrator | # Status of RabbitMQ
2025-07-04 18:42:01.213933 | orchestrator |
2025-07-04 18:42:01.213946 | orchestrator | + echo
2025-07-04 18:42:01.213957 | orchestrator | + echo '# Status of RabbitMQ'
2025-07-04 18:42:01.213968 | orchestrator | + echo
2025-07-04 18:42:01.213980 | orchestrator | + perl nagios-plugins/check_rabbitmq_cluster --ssl 1 -H api-int.testbed.osism.xyz -u openstack -p password
2025-07-04 18:42:01.674399 | orchestrator | RABBITMQ_CLUSTER OK - nb_running_node OK (3) nb_running_disc_node OK (3) nb_running_ram_node OK (0)
2025-07-04 18:42:01.684740 | orchestrator |
2025-07-04 18:42:01.684839 | orchestrator | # Status of Redis
2025-07-04 18:42:01.684863 | orchestrator |
2025-07-04 18:42:01.684882 | orchestrator | + echo
2025-07-04 18:42:01.684900 | orchestrator | + echo '# Status of Redis'
2025-07-04 18:42:01.684920 | orchestrator | + echo
2025-07-04 18:42:01.684938 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j
2025-07-04 18:42:01.690711 | orchestrator | TCP OK - 0.002 second response time on 192.168.16.10 port 6379|time=0.002177s;;;0.000000;10.000000
2025-07-04 18:42:01.691167 | orchestrator |
2025-07-04 18:42:01.691199 | orchestrator | # Create backup of MariaDB database
2025-07-04 18:42:01.691213 | orchestrator |
2025-07-04 18:42:01.691226 | orchestrator | + popd
2025-07-04 18:42:01.691237 | orchestrator | + echo
2025-07-04 18:42:01.691248 | orchestrator | + echo '# Create backup of MariaDB database'
2025-07-04 18:42:01.691259 | orchestrator | + echo
2025-07-04 18:42:01.691271 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full
2025-07-04 18:42:03.490251 | orchestrator | 2025-07-04 18:42:03 | INFO  | Task 6f99271d-459a-4875-9785-f682c0d906db (mariadb_backup) was prepared for execution.
2025-07-04 18:42:03.490355 | orchestrator | 2025-07-04 18:42:03 | INFO  | It takes a moment until task 6f99271d-459a-4875-9785-f682c0d906db (mariadb_backup) has been started and output is visible here.
2025-07-04 18:42:07.532767 | orchestrator |
2025-07-04 18:42:07.533255 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-04 18:42:07.538212 | orchestrator |
2025-07-04 18:42:07.538357 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-04 18:42:07.539367 | orchestrator | Friday 04 July 2025 18:42:07 +0000 (0:00:00.179) 0:00:00.179 ***********
2025-07-04 18:42:07.727899 | orchestrator | ok: [testbed-node-0]
2025-07-04 18:42:07.841419 | orchestrator | ok: [testbed-node-1]
2025-07-04 18:42:07.841810 | orchestrator | ok: [testbed-node-2]
2025-07-04 18:42:07.842281 | orchestrator |
2025-07-04 18:42:07.844356 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-04 18:42:07.844436 | orchestrator | Friday 04 July 2025 18:42:07 +0000 (0:00:00.311) 0:00:00.491 ***********
2025-07-04 18:42:08.416472 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2025-07-04 18:42:08.416774 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2025-07-04 18:42:08.418235 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2025-07-04 18:42:08.419506 | orchestrator |
2025-07-04 18:42:08.419958 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2025-07-04 18:42:08.421251 | orchestrator |
2025-07-04 18:42:08.422130 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2025-07-04 18:42:08.422758 | orchestrator | Friday 04 July 2025 18:42:08 +0000 (0:00:00.572) 0:00:01.064 ***********
2025-07-04 18:42:08.850012 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-07-04 18:42:08.850264 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2025-07-04 18:42:08.853676 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2025-07-04 18:42:08.853747 | orchestrator |
2025-07-04 18:42:08.853766 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2025-07-04 18:42:08.853781 | orchestrator | Friday 04 July 2025 18:42:08 +0000 (0:00:00.434) 0:00:01.498 ***********
2025-07-04 18:42:09.401495 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-04 18:42:09.401719 | orchestrator |
2025-07-04 18:42:09.402576 | orchestrator | TASK [mariadb : Get MariaDB container facts] ***********************************
2025-07-04 18:42:09.405980 | orchestrator | Friday 04 July 2025 18:42:09 +0000 (0:00:00.552) 0:00:02.051 ***********
2025-07-04 18:42:12.549933 | orchestrator | ok: [testbed-node-0]
2025-07-04 18:42:12.550496 | orchestrator | ok: [testbed-node-1]
2025-07-04 18:42:12.555169 | orchestrator | ok: [testbed-node-2]
2025-07-04 18:42:12.555438 | orchestrator |
2025-07-04 18:42:12.556142 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] *******************
2025-07-04 18:42:12.556793 | orchestrator | Friday 04 July 2025 18:42:12 +0000 (0:00:03.144) 0:00:05.196 ***********
2025-07-04 18:42:30.412064 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart
2025-07-04 18:42:30.412189 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start
2025-07-04 18:42:30.412203 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2025-07-04 18:42:30.412706 | orchestrator | mariadb_bootstrap_restart
2025-07-04 18:42:30.490233 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:42:30.490338 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:42:30.492866 | orchestrator | changed: [testbed-node-0]
2025-07-04 18:42:30.494200 | orchestrator |
2025-07-04 18:42:30.495168 | orchestrator | PLAY [Restart mariadb services] ************************************************
2025-07-04 18:42:30.496195 | orchestrator | skipping: no hosts matched
2025-07-04 18:42:30.496909 | orchestrator |
2025-07-04 18:42:30.497807 | orchestrator | PLAY [Start mariadb services] **************************************************
2025-07-04 18:42:30.498574 | orchestrator | skipping: no hosts matched
2025-07-04 18:42:30.499348 | orchestrator |
2025-07-04 18:42:30.500742 | orchestrator | PLAY [Restart bootstrap mariadb service] ***************************************
2025-07-04 18:42:30.501024 | orchestrator | skipping: no hosts matched
2025-07-04 18:42:30.504261 | orchestrator |
2025-07-04 18:42:30.504339 | orchestrator | PLAY [Apply mariadb post-configuration] ****************************************
2025-07-04 18:42:30.504354 | orchestrator |
2025-07-04 18:42:30.505223 | orchestrator | TASK [Include mariadb post-deploy.yml] *****************************************
2025-07-04 18:42:30.506134 | orchestrator | Friday 04 July 2025 18:42:30 +0000 (0:00:17.943) 0:00:23.139 ***********
2025-07-04 18:42:30.670316 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:42:30.798542 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:42:30.798962 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:42:30.800015 | orchestrator |
2025-07-04 18:42:30.801692 | orchestrator | TASK [Include mariadb post-upgrade.yml] ****************************************
2025-07-04 18:42:30.802599 | orchestrator | Friday 04 July 2025 18:42:30 +0000 (0:00:00.307) 0:00:23.447 ***********
2025-07-04 18:42:31.173317 | orchestrator | skipping: [testbed-node-0]
2025-07-04 18:42:31.219418 | orchestrator | skipping: [testbed-node-1]
2025-07-04 18:42:31.219932 | orchestrator | skipping: [testbed-node-2]
2025-07-04 18:42:31.221364 | orchestrator |
2025-07-04 18:42:31.222524 | orchestrator | PLAY RECAP *********************************************************************
2025-07-04 18:42:31.223169 | orchestrator | 2025-07-04 18:42:31 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-07-04 18:42:31.223278 | orchestrator | 2025-07-04 18:42:31 | INFO  | Please wait and do not abort execution.
2025-07-04 18:42:31.224485 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-04 18:42:31.225347 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-07-04 18:42:31.226458 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-07-04 18:42:31.227578 | orchestrator |
2025-07-04 18:42:31.228549 | orchestrator |
2025-07-04 18:42:31.229212 | orchestrator | TASKS RECAP ********************************************************************
2025-07-04 18:42:31.229751 | orchestrator | Friday 04 July 2025 18:42:31 +0000 (0:00:00.421) 0:00:23.869 ***********
2025-07-04 18:42:31.230832 | orchestrator | ===============================================================================
2025-07-04 18:42:31.231543 | orchestrator | mariadb : Taking full database backup via Mariabackup ------------------ 17.94s
2025-07-04 18:42:31.232305 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.14s
2025-07-04 18:42:31.234143 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.57s
2025-07-04 18:42:31.234993 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.55s
2025-07-04 18:42:31.235568 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.43s
2025-07-04 18:42:31.236484 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.42s
2025-07-04 18:42:31.236929 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.31s
2025-07-04 18:42:31.237511 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.31s
2025-07-04 18:42:31.808719 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh
2025-07-04 18:42:31.814509 | orchestrator | + set -e
2025-07-04 18:42:31.814554 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-07-04 18:42:31.814563 | orchestrator | ++ export INTERACTIVE=false
2025-07-04 18:42:31.814573 | orchestrator | ++ INTERACTIVE=false
2025-07-04 18:42:31.814580 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-07-04 18:42:31.814588 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-07-04 18:42:31.814596 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2025-07-04 18:42:31.815664 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2025-07-04 18:42:31.822395 | orchestrator |
2025-07-04 18:42:31.822506 | orchestrator | # OpenStack endpoints
2025-07-04 18:42:31.822523 | orchestrator |
2025-07-04 18:42:31.822536 | orchestrator | ++ export MANAGER_VERSION=9.1.0
2025-07-04 18:42:31.822548 | orchestrator | ++ MANAGER_VERSION=9.1.0
2025-07-04 18:42:31.822559 | orchestrator | + export OS_CLOUD=admin
2025-07-04 18:42:31.822570 | orchestrator | + OS_CLOUD=admin
2025-07-04 18:42:31.822581 | orchestrator | + echo
2025-07-04 18:42:31.822592 | orchestrator | + echo '# OpenStack endpoints'
2025-07-04 18:42:31.822603 | orchestrator | + echo
2025-07-04 18:42:31.822615 | orchestrator | + openstack endpoint list
2025-07-04 18:42:35.237618 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2025-07-04 18:42:35.237766 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL |
2025-07-04 18:42:35.238561 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2025-07-04 18:42:35.238581 | orchestrator | | 1bf3fc5b0d364d8cbff92c5d3b145d60 | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 |
2025-07-04 18:42:35.238588 | orchestrator | | 246ee9ffa5714e12850b764d99632345 | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 |
2025-07-04 18:42:35.238594 | orchestrator | | 290d686aea3045e781a984d5c88a4163 | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 |
2025-07-04 18:42:35.238600 | orchestrator | | 2b7117de189942139f64e154589ce550 | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 |
2025-07-04 18:42:35.238605 | orchestrator | | 2dd52c4b3b644cda9714b499fa371423 | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 |
2025-07-04 18:42:35.238613 | orchestrator | | 30c78ea2a61d4044a1fdac3e9ef04683 | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 |
2025-07-04 18:42:35.238622 | orchestrator | | 431222d17a78427b9e3bbe9b48c24aa2 | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 |
2025-07-04 18:42:35.238631 | orchestrator | | 4af363495eb541e3b6e0fa633603396a | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 |
2025-07-04 18:42:35.238641 | orchestrator | | 4ddd7300d6644b3a85d00370d14ba82a | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 |
2025-07-04 18:42:35.238649 | orchestrator | | 4ec63836c57142cd904daa49c63c2be9 | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 |
2025-07-04 18:42:35.238658 | orchestrator | | 62ba969f056f4d22862795aeaa1ce43d | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 |
2025-07-04 18:42:35.238667 | orchestrator | | 735cd08c39154af28275e644234bd8ee | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 |
2025-07-04 18:42:35.238676 | orchestrator | | 7cd80a8164f7461fbdae3d1572623388 | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s |
2025-07-04 18:42:35.238685 | orchestrator | | 9b12a5ef44f846f4aea4006aa8e283e3 | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 |
2025-07-04 18:42:35.238695 | orchestrator | | a86051e802074f3497b2e1c97de8e3b0 | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 |
2025-07-04 18:42:35.238703 | orchestrator | | be606a3f645d4e2e90b7c79d85af5583 | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 |
2025-07-04 18:42:35.238712 | orchestrator | | cb1efb4a38244b219fec4c09562697a1 | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 |
2025-07-04 18:42:35.238721 | orchestrator | | cd1b1e4503ef42bc8ebba2f96e0f7d8f | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s |
2025-07-04 18:42:35.238741 | orchestrator | | dc49954c6958494aa051f5ac45d83137 | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 |
2025-07-04 18:42:35.238775 | orchestrator | | ef81fa6456ff4a70b424a979e7d4160c | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s |
2025-07-04 18:42:35.238785 | orchestrator | | f37d7849b5ba4bb2b24439470131ab58 | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 |
2025-07-04 18:42:35.238794 | orchestrator | | ff3ac02c7ef2471f970aabdd0af62307 | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s |
2025-07-04 18:42:35.238794 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2025-07-04 18:42:35.541430 | orchestrator |
2025-07-04 18:42:35.541531 | orchestrator | # Cinder
2025-07-04 18:42:35.541547 | orchestrator |
2025-07-04 18:42:35.541559 | orchestrator | + echo
2025-07-04 18:42:35.541571 | orchestrator | + echo '# Cinder'
2025-07-04 18:42:35.541582 | orchestrator | + echo
2025-07-04 18:42:35.541593 | orchestrator | + openstack volume service list
2025-07-04 18:42:38.842831 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2025-07-04 18:42:38.842933 | orchestrator | | Binary | Host | Zone | Status | State | Updated At |
2025-07-04 18:42:38.842947 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2025-07-04 18:42:38.842980 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2025-07-04T18:42:36.000000 |
2025-07-04 18:42:38.842992 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2025-07-04T18:42:35.000000 |
2025-07-04 18:42:38.843003 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2025-07-04T18:42:36.000000 |
2025-07-04 18:42:38.843018 | orchestrator | | cinder-volume | testbed-node-5@rbd-volumes | nova | enabled | up | 2025-07-04T18:42:35.000000 |
2025-07-04 18:42:38.843029 | orchestrator | | cinder-volume | testbed-node-3@rbd-volumes | nova | enabled | up | 2025-07-04T18:42:37.000000 |
2025-07-04 18:42:38.843040 | orchestrator | | cinder-volume | testbed-node-4@rbd-volumes | nova | enabled | up | 2025-07-04T18:42:29.000000 |
2025-07-04 18:42:38.843050 | orchestrator | | cinder-backup | testbed-node-3 | nova | enabled | up | 2025-07-04T18:42:31.000000 |
2025-07-04 18:42:38.843061 | orchestrator | | cinder-backup | testbed-node-4 | nova | enabled | up | 2025-07-04T18:42:32.000000 |
2025-07-04 18:42:38.843071 | orchestrator | | cinder-backup | testbed-node-5 | nova | enabled | up | 2025-07-04T18:42:33.000000 |
2025-07-04 18:42:38.843127 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-07-04 18:42:39.102731 | orchestrator | 2025-07-04 18:42:39.102830 | orchestrator | # Neutron 2025-07-04 18:42:39.102848 | orchestrator | 2025-07-04 18:42:39.102857 | orchestrator | + echo 2025-07-04 18:42:39.102864 | orchestrator | + echo '# Neutron' 2025-07-04 18:42:39.102873 | orchestrator | + echo 2025-07-04 18:42:39.102880 | orchestrator | + openstack network agent list 2025-07-04 18:42:42.360532 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-07-04 18:42:42.360645 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary | 2025-07-04 18:42:42.360661 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-07-04 18:42:42.360673 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller | 2025-07-04 18:42:42.360714 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller | 2025-07-04 18:42:42.360726 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller | 2025-07-04 18:42:42.360737 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller | 2025-07-04 18:42:42.360748 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller | 2025-07-04 18:42:42.360758 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller | 2025-07-04 18:42:42.360769 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | 
testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent | 2025-07-04 18:42:42.360779 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent | 2025-07-04 18:42:42.360790 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent | 2025-07-04 18:42:42.360801 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-07-04 18:42:42.637171 | orchestrator | + openstack network service provider list 2025-07-04 18:42:45.439745 | orchestrator | +---------------+------+---------+ 2025-07-04 18:42:45.439873 | orchestrator | | Service Type | Name | Default | 2025-07-04 18:42:45.439889 | orchestrator | +---------------+------+---------+ 2025-07-04 18:42:45.439902 | orchestrator | | L3_ROUTER_NAT | ovn | True | 2025-07-04 18:42:45.439926 | orchestrator | +---------------+------+---------+ 2025-07-04 18:42:45.726274 | orchestrator | 2025-07-04 18:42:45.726376 | orchestrator | # Nova 2025-07-04 18:42:45.726392 | orchestrator | 2025-07-04 18:42:45.726404 | orchestrator | + echo 2025-07-04 18:42:45.726415 | orchestrator | + echo '# Nova' 2025-07-04 18:42:45.726426 | orchestrator | + echo 2025-07-04 18:42:45.726437 | orchestrator | + openstack compute service list 2025-07-04 18:42:48.463873 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-07-04 18:42:48.463968 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At | 2025-07-04 18:42:48.463978 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-07-04 18:42:48.463987 | orchestrator | | ef224326-fa2c-4c4e-bf7e-2750cd573447 | nova-scheduler 
| testbed-node-0 | internal | enabled | up | 2025-07-04T18:42:43.000000 | 2025-07-04 18:42:48.463995 | orchestrator | | b0dccf1f-4310-4ba4-bce2-b58d62d44cc1 | nova-scheduler | testbed-node-1 | internal | enabled | up | 2025-07-04T18:42:48.000000 | 2025-07-04 18:42:48.464002 | orchestrator | | fc0a8941-de52-4b52-a3ce-18459a4684f4 | nova-scheduler | testbed-node-2 | internal | enabled | up | 2025-07-04T18:42:39.000000 | 2025-07-04 18:42:48.464010 | orchestrator | | e9f2089a-c849-446e-a31b-2ec2af7c5da6 | nova-conductor | testbed-node-0 | internal | enabled | up | 2025-07-04T18:42:41.000000 | 2025-07-04 18:42:48.464033 | orchestrator | | d905356b-386a-4168-a38c-b9a5ed71f02f | nova-conductor | testbed-node-1 | internal | enabled | up | 2025-07-04T18:42:43.000000 | 2025-07-04 18:42:48.464041 | orchestrator | | 7bfbcde6-a6f0-41fd-8003-5351e7fca9c5 | nova-conductor | testbed-node-2 | internal | enabled | up | 2025-07-04T18:42:43.000000 | 2025-07-04 18:42:48.464049 | orchestrator | | b93f05c2-7f4a-4001-9af8-5002f4a14c06 | nova-compute | testbed-node-5 | nova | enabled | up | 2025-07-04T18:42:38.000000 | 2025-07-04 18:42:48.464057 | orchestrator | | 2d425b36-7dac-4f96-8646-7662a0655c36 | nova-compute | testbed-node-4 | nova | enabled | up | 2025-07-04T18:42:39.000000 | 2025-07-04 18:42:48.464083 | orchestrator | | 50b70eb1-781c-4bf5-9658-fffc5e3f8026 | nova-compute | testbed-node-3 | nova | enabled | up | 2025-07-04T18:42:39.000000 | 2025-07-04 18:42:48.464091 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-07-04 18:42:48.744581 | orchestrator | + openstack hypervisor list 2025-07-04 18:42:53.159909 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-07-04 18:42:53.160040 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State | 2025-07-04 18:42:53.160065 | orchestrator | 
+--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-07-04 18:42:53.160077 | orchestrator | | be36c443-d2a2-4996-bdc6-26ae460d739c | testbed-node-5 | QEMU | 192.168.16.15 | up | 2025-07-04 18:42:53.160088 | orchestrator | | 770f5feb-2190-49be-8d6f-0d1dafdf6255 | testbed-node-4 | QEMU | 192.168.16.14 | up | 2025-07-04 18:42:53.160099 | orchestrator | | 6dff66f3-3527-4694-ba08-97686797485f | testbed-node-3 | QEMU | 192.168.16.13 | up | 2025-07-04 18:42:53.160145 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-07-04 18:42:53.445393 | orchestrator | 2025-07-04 18:42:53.445499 | orchestrator | # Run OpenStack test play 2025-07-04 18:42:53.445519 | orchestrator | 2025-07-04 18:42:53.445534 | orchestrator | + echo 2025-07-04 18:42:53.445550 | orchestrator | + echo '# Run OpenStack test play' 2025-07-04 18:42:53.445566 | orchestrator | + echo 2025-07-04 18:42:53.445580 | orchestrator | + osism apply --environment openstack test 2025-07-04 18:42:55.279746 | orchestrator | 2025-07-04 18:42:55 | INFO  | Trying to run play test in environment openstack 2025-07-04 18:42:55.285178 | orchestrator | Registering Redlock._acquired_script 2025-07-04 18:42:55.285265 | orchestrator | Registering Redlock._extend_script 2025-07-04 18:42:55.285280 | orchestrator | Registering Redlock._release_script 2025-07-04 18:42:55.350169 | orchestrator | 2025-07-04 18:42:55 | INFO  | Task 52e98c37-d032-4144-9a1a-b3ca017b8a07 (test) was prepared for execution. 2025-07-04 18:42:55.350270 | orchestrator | 2025-07-04 18:42:55 | INFO  | It takes a moment until task 52e98c37-d032-4144-9a1a-b3ca017b8a07 (test) has been started and output is visible here. 
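The service checks traced above (`openstack volume service list`, `network agent list`, `compute service list`) all reduce to the same pass condition: no row may report a `down` state. A minimal sketch of how that condition could be scripted — the `check_all_up` helper is hypothetical, not part of the testbed scripts:

```shell
# check_all_up: read an OpenStack table from stdin and fail if any
# service row reports a "down" state. Hypothetical helper, shown only
# to illustrate the pass condition the checks above are verifying.
check_all_up() {
  if grep -q "| down |" -; then
    echo "some services are down"
    return 1
  fi
  echo "all services up"
}

# Example usage against a live cloud (assumes the openstack CLI and
# credentials are available, as in the job environment above):
#   openstack volume service list | check_all_up
#   openstack compute service list | check_all_up
```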
2025-07-04 18:42:59.384811 | orchestrator | 2025-07-04 18:42:59.390518 | orchestrator | PLAY [Create test project] ***************************************************** 2025-07-04 18:42:59.390849 | orchestrator | 2025-07-04 18:42:59.392458 | orchestrator | TASK [Create test domain] ****************************************************** 2025-07-04 18:42:59.395722 | orchestrator | Friday 04 July 2025 18:42:59 +0000 (0:00:00.080) 0:00:00.080 *********** 2025-07-04 18:43:03.192843 | orchestrator | changed: [localhost] 2025-07-04 18:43:03.195415 | orchestrator | 2025-07-04 18:43:03.197378 | orchestrator | TASK [Create test-admin user] ************************************************** 2025-07-04 18:43:03.198419 | orchestrator | Friday 04 July 2025 18:43:03 +0000 (0:00:03.810) 0:00:03.890 *********** 2025-07-04 18:43:07.476306 | orchestrator | changed: [localhost] 2025-07-04 18:43:07.477935 | orchestrator | 2025-07-04 18:43:07.477977 | orchestrator | TASK [Add manager role to user test-admin] ************************************* 2025-07-04 18:43:07.477992 | orchestrator | Friday 04 July 2025 18:43:07 +0000 (0:00:04.282) 0:00:08.172 *********** 2025-07-04 18:43:13.660412 | orchestrator | changed: [localhost] 2025-07-04 18:43:13.660538 | orchestrator | 2025-07-04 18:43:13.660556 | orchestrator | TASK [Create test project] ***************************************************** 2025-07-04 18:43:13.660868 | orchestrator | Friday 04 July 2025 18:43:13 +0000 (0:00:06.185) 0:00:14.357 *********** 2025-07-04 18:43:17.996643 | orchestrator | changed: [localhost] 2025-07-04 18:43:17.997614 | orchestrator | 2025-07-04 18:43:17.998399 | orchestrator | TASK [Create test user] ******************************************************** 2025-07-04 18:43:18.000435 | orchestrator | Friday 04 July 2025 18:43:17 +0000 (0:00:04.336) 0:00:18.694 *********** 2025-07-04 18:43:22.302003 | orchestrator | changed: [localhost] 2025-07-04 18:43:22.302688 | orchestrator | 2025-07-04 18:43:22.303937 | 
orchestrator | TASK [Add member roles to user test] ******************************************* 2025-07-04 18:43:22.305784 | orchestrator | Friday 04 July 2025 18:43:22 +0000 (0:00:04.304) 0:00:22.998 *********** 2025-07-04 18:43:34.514270 | orchestrator | changed: [localhost] => (item=load-balancer_member) 2025-07-04 18:43:34.514419 | orchestrator | changed: [localhost] => (item=member) 2025-07-04 18:43:34.514445 | orchestrator | changed: [localhost] => (item=creator) 2025-07-04 18:43:34.514464 | orchestrator | 2025-07-04 18:43:34.517292 | orchestrator | TASK [Create test server group] ************************************************ 2025-07-04 18:43:34.518462 | orchestrator | Friday 04 July 2025 18:43:34 +0000 (0:00:12.205) 0:00:35.204 *********** 2025-07-04 18:43:38.970547 | orchestrator | changed: [localhost] 2025-07-04 18:43:38.970657 | orchestrator | 2025-07-04 18:43:38.971493 | orchestrator | TASK [Create ssh security group] *********************************************** 2025-07-04 18:43:38.973069 | orchestrator | Friday 04 July 2025 18:43:38 +0000 (0:00:04.463) 0:00:39.668 *********** 2025-07-04 18:43:44.156674 | orchestrator | changed: [localhost] 2025-07-04 18:43:44.156764 | orchestrator | 2025-07-04 18:43:44.157632 | orchestrator | TASK [Add rule to ssh security group] ****************************************** 2025-07-04 18:43:44.163427 | orchestrator | Friday 04 July 2025 18:43:44 +0000 (0:00:05.184) 0:00:44.853 *********** 2025-07-04 18:43:48.735637 | orchestrator | changed: [localhost] 2025-07-04 18:43:48.736396 | orchestrator | 2025-07-04 18:43:48.737627 | orchestrator | TASK [Create icmp security group] ********************************************** 2025-07-04 18:43:48.740223 | orchestrator | Friday 04 July 2025 18:43:48 +0000 (0:00:04.579) 0:00:49.432 *********** 2025-07-04 18:43:53.260499 | orchestrator | changed: [localhost] 2025-07-04 18:43:53.261980 | orchestrator | 2025-07-04 18:43:53.264172 | orchestrator | TASK [Add rule to icmp security 
group] ***************************************** 2025-07-04 18:43:53.264926 | orchestrator | Friday 04 July 2025 18:43:53 +0000 (0:00:04.524) 0:00:53.957 *********** 2025-07-04 18:43:57.447412 | orchestrator | changed: [localhost] 2025-07-04 18:43:57.447536 | orchestrator | 2025-07-04 18:43:57.448400 | orchestrator | TASK [Create test keypair] ***************************************************** 2025-07-04 18:43:57.449038 | orchestrator | Friday 04 July 2025 18:43:57 +0000 (0:00:04.186) 0:00:58.144 *********** 2025-07-04 18:44:02.058162 | orchestrator | changed: [localhost] 2025-07-04 18:44:02.058314 | orchestrator | 2025-07-04 18:44:02.058333 | orchestrator | TASK [Create test network topology] ******************************************** 2025-07-04 18:44:02.058943 | orchestrator | Friday 04 July 2025 18:44:02 +0000 (0:00:04.607) 0:01:02.752 *********** 2025-07-04 18:44:18.151223 | orchestrator | changed: [localhost] 2025-07-04 18:44:18.151371 | orchestrator | 2025-07-04 18:44:18.151389 | orchestrator | TASK [Create test instances] *************************************************** 2025-07-04 18:44:18.152724 | orchestrator | Friday 04 July 2025 18:44:18 +0000 (0:00:16.092) 0:01:18.844 *********** 2025-07-04 18:46:29.253232 | orchestrator | changed: [localhost] => (item=test) 2025-07-04 18:46:29.253365 | orchestrator | changed: [localhost] => (item=test-1) 2025-07-04 18:46:29.253380 | orchestrator | changed: [localhost] => (item=test-2) 2025-07-04 18:46:29.253390 | orchestrator | 2025-07-04 18:46:29.253401 | orchestrator | STILL ALIVE [task 'Create test instances' is running] ************************** 2025-07-04 18:46:59.252408 | orchestrator | 2025-07-04 18:46:59.252614 | orchestrator | STILL ALIVE [task 'Create test instances' is running] ************************** 2025-07-04 18:47:29.253417 | orchestrator | changed: [localhost] => (item=test-3) 2025-07-04 18:47:29.253560 | orchestrator | 2025-07-04 18:47:29.253578 | orchestrator | STILL ALIVE [task 'Create 
test instances' is running] ************************** 2025-07-04 18:47:44.154429 | orchestrator | changed: [localhost] => (item=test-4) 2025-07-04 18:47:44.154655 | orchestrator | 2025-07-04 18:47:44.154689 | orchestrator | TASK [Add metadata to instances] *********************************************** 2025-07-04 18:47:44.157105 | orchestrator | Friday 04 July 2025 18:47:44 +0000 (0:03:26.004) 0:04:44.848 *********** 2025-07-04 18:48:08.985402 | orchestrator | changed: [localhost] => (item=test) 2025-07-04 18:48:08.985771 | orchestrator | changed: [localhost] => (item=test-1) 2025-07-04 18:48:08.985811 | orchestrator | changed: [localhost] => (item=test-2) 2025-07-04 18:48:08.985854 | orchestrator | changed: [localhost] => (item=test-3) 2025-07-04 18:48:08.985872 | orchestrator | changed: [localhost] => (item=test-4) 2025-07-04 18:48:08.986177 | orchestrator | 2025-07-04 18:48:08.987820 | orchestrator | TASK [Add tag to instances] **************************************************** 2025-07-04 18:48:08.988120 | orchestrator | Friday 04 July 2025 18:48:08 +0000 (0:00:24.831) 0:05:09.680 *********** 2025-07-04 18:48:42.050395 | orchestrator | changed: [localhost] => (item=test) 2025-07-04 18:48:42.050689 | orchestrator | changed: [localhost] => (item=test-1) 2025-07-04 18:48:42.050726 | orchestrator | changed: [localhost] => (item=test-2) 2025-07-04 18:48:42.053513 | orchestrator | changed: [localhost] => (item=test-3) 2025-07-04 18:48:42.053576 | orchestrator | changed: [localhost] => (item=test-4) 2025-07-04 18:48:42.054100 | orchestrator | 2025-07-04 18:48:42.054894 | orchestrator | TASK [Create test volume] ****************************************************** 2025-07-04 18:48:42.055421 | orchestrator | Friday 04 July 2025 18:48:42 +0000 (0:00:33.063) 0:05:42.743 *********** 2025-07-04 18:48:49.376708 | orchestrator | changed: [localhost] 2025-07-04 18:48:49.376842 | orchestrator | 2025-07-04 18:48:49.378164 | orchestrator | TASK [Attach test volume] 
****************************************************** 2025-07-04 18:48:49.379781 | orchestrator | Friday 04 July 2025 18:48:49 +0000 (0:00:07.328) 0:05:50.072 *********** 2025-07-04 18:49:03.022545 | orchestrator | changed: [localhost] 2025-07-04 18:49:03.022712 | orchestrator | 2025-07-04 18:49:03.022743 | orchestrator | TASK [Create floating ip address] ********************************************** 2025-07-04 18:49:03.024483 | orchestrator | Friday 04 July 2025 18:49:03 +0000 (0:00:13.644) 0:06:03.717 *********** 2025-07-04 18:49:08.631171 | orchestrator | ok: [localhost] 2025-07-04 18:49:08.631311 | orchestrator | 2025-07-04 18:49:08.631911 | orchestrator | TASK [Print floating ip address] *********************************************** 2025-07-04 18:49:08.633166 | orchestrator | Friday 04 July 2025 18:49:08 +0000 (0:00:05.608) 0:06:09.325 *********** 2025-07-04 18:49:08.681766 | orchestrator | ok: [localhost] => { 2025-07-04 18:49:08.683293 | orchestrator |  "msg": "192.168.112.123" 2025-07-04 18:49:08.683834 | orchestrator | } 2025-07-04 18:49:08.684694 | orchestrator | 2025-07-04 18:49:08.685531 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-04 18:49:08.686701 | orchestrator | 2025-07-04 18:49:08 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-07-04 18:49:08.686741 | orchestrator | 2025-07-04 18:49:08 | INFO  | Please wait and do not abort execution. 
2025-07-04 18:49:08.687675 | orchestrator | localhost : ok=20  changed=18  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-04 18:49:08.688267 | orchestrator | 2025-07-04 18:49:08.689111 | orchestrator | 2025-07-04 18:49:08.689951 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-04 18:49:08.690884 | orchestrator | Friday 04 July 2025 18:49:08 +0000 (0:00:00.051) 0:06:09.377 *********** 2025-07-04 18:49:08.691405 | orchestrator | =============================================================================== 2025-07-04 18:49:08.692717 | orchestrator | Create test instances ------------------------------------------------- 206.00s 2025-07-04 18:49:08.693666 | orchestrator | Add tag to instances --------------------------------------------------- 33.06s 2025-07-04 18:49:08.694138 | orchestrator | Add metadata to instances ---------------------------------------------- 24.83s 2025-07-04 18:49:08.694524 | orchestrator | Create test network topology ------------------------------------------- 16.09s 2025-07-04 18:49:08.695000 | orchestrator | Attach test volume ----------------------------------------------------- 13.64s 2025-07-04 18:49:08.695774 | orchestrator | Add member roles to user test ------------------------------------------ 12.21s 2025-07-04 18:49:08.696552 | orchestrator | Create test volume ------------------------------------------------------ 7.33s 2025-07-04 18:49:08.696976 | orchestrator | Add manager role to user test-admin ------------------------------------- 6.19s 2025-07-04 18:49:08.697814 | orchestrator | Create floating ip address ---------------------------------------------- 5.61s 2025-07-04 18:49:08.698483 | orchestrator | Create ssh security group ----------------------------------------------- 5.18s 2025-07-04 18:49:08.698994 | orchestrator | Create test keypair ----------------------------------------------------- 4.61s 2025-07-04 18:49:08.699778 | orchestrator | Add rule 
to ssh security group ------------------------------------------ 4.58s 2025-07-04 18:49:08.700209 | orchestrator | Create icmp security group ---------------------------------------------- 4.52s 2025-07-04 18:49:08.700796 | orchestrator | Create test server group ------------------------------------------------ 4.46s 2025-07-04 18:49:08.701278 | orchestrator | Create test project ----------------------------------------------------- 4.34s 2025-07-04 18:49:08.701738 | orchestrator | Create test user -------------------------------------------------------- 4.30s 2025-07-04 18:49:08.704159 | orchestrator | Create test-admin user -------------------------------------------------- 4.28s 2025-07-04 18:49:08.705203 | orchestrator | Add rule to icmp security group ----------------------------------------- 4.19s 2025-07-04 18:49:08.706556 | orchestrator | Create test domain ------------------------------------------------------ 3.81s 2025-07-04 18:49:08.707405 | orchestrator | Print floating ip address ----------------------------------------------- 0.05s 2025-07-04 18:49:09.213296 | orchestrator | + server_list 2025-07-04 18:49:09.213393 | orchestrator | + openstack --os-cloud test server list 2025-07-04 18:49:13.252153 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+ 2025-07-04 18:49:13.252282 | orchestrator | | ID | Name | Status | Networks | Image | Flavor | 2025-07-04 18:49:13.252297 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+ 2025-07-04 18:49:13.252309 | orchestrator | | 7f0485ef-97ff-400f-9d5d-f4f87eb31acb | test-4 | ACTIVE | auto_allocated_network=10.42.0.59, 192.168.112.136 | Cirros 0.6.2 | SCS-1L-1-5 | 2025-07-04 18:49:13.252334 | orchestrator | | b6ba8880-8743-440f-918b-5a90dc3ccaeb | test-3 | ACTIVE | auto_allocated_network=10.42.0.34, 
192.168.112.198 | Cirros 0.6.2 | SCS-1L-1-5 | 2025-07-04 18:49:13.252345 | orchestrator | | b076f055-06ec-4cad-b71f-451db8346a35 | test-2 | ACTIVE | auto_allocated_network=10.42.0.12, 192.168.112.145 | Cirros 0.6.2 | SCS-1L-1-5 | 2025-07-04 18:49:13.252357 | orchestrator | | 5be32d25-f1f5-4fbc-8cad-4772ae011646 | test-1 | ACTIVE | auto_allocated_network=10.42.0.11, 192.168.112.101 | Cirros 0.6.2 | SCS-1L-1-5 | 2025-07-04 18:49:13.252368 | orchestrator | | f6267ec1-7572-4e27-8f5f-e0bac7aa64c2 | test | ACTIVE | auto_allocated_network=10.42.0.53, 192.168.112.123 | Cirros 0.6.2 | SCS-1L-1-5 | 2025-07-04 18:49:13.252379 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+ 2025-07-04 18:49:13.547838 | orchestrator | + openstack --os-cloud test server show test 2025-07-04 18:49:17.181247 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-07-04 18:49:17.181349 | orchestrator | | Field | Value | 2025-07-04 18:49:17.181363 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-07-04 18:49:17.181391 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-07-04 18:49:17.181402 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-07-04 18:49:17.181412 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-07-04 18:49:17.181422 | orchestrator | 
| OS-EXT-SRV-ATTR:hostname | test | 2025-07-04 18:49:17.181432 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-07-04 18:49:17.181442 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-07-04 18:49:17.181461 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-07-04 18:49:17.181472 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-07-04 18:49:17.181497 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-07-04 18:49:17.181507 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-07-04 18:49:17.181517 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-07-04 18:49:17.181533 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-07-04 18:49:17.181547 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-07-04 18:49:17.181557 | orchestrator | | OS-EXT-STS:task_state | None | 2025-07-04 18:49:17.181567 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-07-04 18:49:17.181577 | orchestrator | | OS-SRV-USG:launched_at | 2025-07-04T18:44:48.000000 | 2025-07-04 18:49:17.181587 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-07-04 18:49:17.181597 | orchestrator | | accessIPv4 | | 2025-07-04 18:49:17.181607 | orchestrator | | accessIPv6 | | 2025-07-04 18:49:17.181617 | orchestrator | | addresses | auto_allocated_network=10.42.0.53, 192.168.112.123 | 2025-07-04 18:49:17.181633 | orchestrator | | config_drive | | 2025-07-04 18:49:17.181678 | orchestrator | | created | 2025-07-04T18:44:26Z | 2025-07-04 18:49:17.181704 | orchestrator | | description | None | 2025-07-04 18:49:17.181722 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-07-04 18:49:17.181744 | orchestrator | | hostId | 
42cfa82fa163d2d4d013a0e35d1d5f93c198e12f484c933aea843104 | 2025-07-04 18:49:17.181759 | orchestrator | | host_status | None | 2025-07-04 18:49:17.181770 | orchestrator | | id | f6267ec1-7572-4e27-8f5f-e0bac7aa64c2 | 2025-07-04 18:49:17.181781 | orchestrator | | image | Cirros 0.6.2 (6d28de46-ba60-48fc-bfca-6a2dd6cd9cc6) | 2025-07-04 18:49:17.181793 | orchestrator | | key_name | test | 2025-07-04 18:49:17.181805 | orchestrator | | locked | False | 2025-07-04 18:49:17.181816 | orchestrator | | locked_reason | None | 2025-07-04 18:49:17.181828 | orchestrator | | name | test | 2025-07-04 18:49:17.181853 | orchestrator | | pinned_availability_zone | None | 2025-07-04 18:49:17.181865 | orchestrator | | progress | 0 | 2025-07-04 18:49:17.181876 | orchestrator | | project_id | 3cd23a63a9cb44f2a82f66de788732e0 | 2025-07-04 18:49:17.181888 | orchestrator | | properties | hostname='test' | 2025-07-04 18:49:17.181905 | orchestrator | | security_groups | name='icmp' | 2025-07-04 18:49:17.181918 | orchestrator | | | name='ssh' | 2025-07-04 18:49:17.181931 | orchestrator | | server_groups | None | 2025-07-04 18:49:17.181943 | orchestrator | | status | ACTIVE | 2025-07-04 18:49:17.181956 | orchestrator | | tags | test | 2025-07-04 18:49:17.181969 | orchestrator | | trusted_image_certificates | None | 2025-07-04 18:49:17.181982 | orchestrator | | updated | 2025-07-04T18:47:49Z | 2025-07-04 18:49:17.182006 | orchestrator | | user_id | 7955a41091c4452196b0831ffd4c2543 | 2025-07-04 18:49:17.182092 | orchestrator | | volumes_attached | delete_on_termination='False', id='e531b6a5-ad81-4c5f-b79f-4343a6524fd0' | 2025-07-04 18:49:17.184904 | orchestrator | 
+-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-07-04 18:49:17.447227 | orchestrator | + openstack --os-cloud test server show test-1 2025-07-04 18:49:20.794294 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-07-04 18:49:20.794419 | orchestrator | | Field | Value | 2025-07-04 18:49:20.794435 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-07-04 18:49:20.794447 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-07-04 18:49:20.794458 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-07-04 18:49:20.794470 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-07-04 18:49:20.794481 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 | 2025-07-04 18:49:20.794516 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-07-04 18:49:20.794554 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-07-04 18:49:20.794566 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-07-04 18:49:20.794577 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-07-04 18:49:20.794607 | orchestrator | | 
OS-EXT-SRV-ATTR:ramdisk_id | None |
2025-07-04 18:49:20.794620 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2025-07-04 18:49:20.794631 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2025-07-04 18:49:20.794688 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2025-07-04 18:49:20.794701 | orchestrator | | OS-EXT-STS:power_state | Running |
2025-07-04 18:49:20.794712 | orchestrator | | OS-EXT-STS:task_state | None |
2025-07-04 18:49:20.794723 | orchestrator | | OS-EXT-STS:vm_state | active |
2025-07-04 18:49:20.794744 | orchestrator | | OS-SRV-USG:launched_at | 2025-07-04T18:45:34.000000 |
2025-07-04 18:49:20.794755 | orchestrator | | OS-SRV-USG:terminated_at | None |
2025-07-04 18:49:20.794774 | orchestrator | | accessIPv4 | |
2025-07-04 18:49:20.794786 | orchestrator | | accessIPv6 | |
2025-07-04 18:49:20.794798 | orchestrator | | addresses | auto_allocated_network=10.42.0.11, 192.168.112.101 |
2025-07-04 18:49:20.794816 | orchestrator | | config_drive | |
2025-07-04 18:49:20.794828 | orchestrator | | created | 2025-07-04T18:45:12Z |
2025-07-04 18:49:20.794844 | orchestrator | | description | None |
2025-07-04 18:49:20.794855 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2025-07-04 18:49:20.794866 | orchestrator | | hostId | b8812f18e985d7a325b22b7ed8ed95800bb2088e8d8606c50408f59c |
2025-07-04 18:49:20.794878 | orchestrator | | host_status | None |
2025-07-04 18:49:20.794907 | orchestrator | | id | 5be32d25-f1f5-4fbc-8cad-4772ae011646 |
2025-07-04 18:49:20.794919 | orchestrator | | image | Cirros 0.6.2 (6d28de46-ba60-48fc-bfca-6a2dd6cd9cc6) |
2025-07-04 18:49:20.794930 | orchestrator | | key_name | test |
2025-07-04 18:49:20.794941 | orchestrator | | locked | False |
2025-07-04 18:49:20.794952 | orchestrator | | locked_reason | None |
2025-07-04 18:49:20.794964 | orchestrator | | name | test-1 |
2025-07-04 18:49:20.794982 | orchestrator | | pinned_availability_zone | None |
2025-07-04 18:49:20.794999 | orchestrator | | progress | 0 |
2025-07-04 18:49:20.795010 | orchestrator | | project_id | 3cd23a63a9cb44f2a82f66de788732e0 |
2025-07-04 18:49:20.795021 | orchestrator | | properties | hostname='test-1' |
2025-07-04 18:49:20.795032 | orchestrator | | security_groups | name='icmp' |
2025-07-04 18:49:20.795051 | orchestrator | | | name='ssh' |
2025-07-04 18:49:20.795062 | orchestrator | | server_groups | None |
2025-07-04 18:49:20.795073 | orchestrator | | status | ACTIVE |
2025-07-04 18:49:20.795085 | orchestrator | | tags | test |
2025-07-04 18:49:20.795096 | orchestrator | | trusted_image_certificates | None |
2025-07-04 18:49:20.795107 | orchestrator | | updated | 2025-07-04T18:47:54Z |
2025-07-04 18:49:20.795124 | orchestrator | | user_id | 7955a41091c4452196b0831ffd4c2543 |
2025-07-04 18:49:20.795135 | orchestrator | | volumes_attached | |
2025-07-04 18:49:20.799042 | orchestrator | +-------------------------------------+------------------------------------------------------------------------------------------------------------------------------+
2025-07-04 18:49:21.122921 | orchestrator | + openstack --os-cloud test server show test-2
2025-07-04 18:49:24.268106 | orchestrator | +-------------------------------------+------------------------------------------------------------------------------------------------------------------------------+
2025-07-04 18:49:24.268239 | orchestrator | | Field | Value |
2025-07-04 18:49:24.268371 | orchestrator | +-------------------------------------+------------------------------------------------------------------------------------------------------------------------------+
2025-07-04 18:49:24.268388 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2025-07-04 18:49:24.268400 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2025-07-04 18:49:24.268413 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2025-07-04 18:49:24.268424 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 |
2025-07-04 18:49:24.268435 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2025-07-04 18:49:24.268446 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2025-07-04 18:49:24.268457 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2025-07-04 18:49:24.268484 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2025-07-04 18:49:24.268515 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2025-07-04 18:49:24.268539 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2025-07-04 18:49:24.268550 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2025-07-04 18:49:24.268561 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2025-07-04 18:49:24.268573 | orchestrator | | OS-EXT-STS:power_state | Running |
2025-07-04 18:49:24.268584 | orchestrator | | OS-EXT-STS:task_state | None |
2025-07-04 18:49:24.268595 | orchestrator | | OS-EXT-STS:vm_state | active |
2025-07-04 18:49:24.268606 | orchestrator | | OS-SRV-USG:launched_at | 2025-07-04T18:46:13.000000 |
2025-07-04 18:49:24.268617 | orchestrator | | OS-SRV-USG:terminated_at | None |
2025-07-04 18:49:24.268628 | orchestrator | | accessIPv4 | |
2025-07-04 18:49:24.268641 | orchestrator | | accessIPv6 | |
2025-07-04 18:49:24.268697 | orchestrator | | addresses | auto_allocated_network=10.42.0.12, 192.168.112.145 |
2025-07-04 18:49:24.268726 | orchestrator | | config_drive | |
2025-07-04 18:49:24.268739 | orchestrator | | created | 2025-07-04T18:45:52Z |
2025-07-04 18:49:24.268752 | orchestrator | | description | None |
2025-07-04 18:49:24.268765 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2025-07-04 18:49:24.268778 | orchestrator | | hostId | d5918aa20724895b012616047ced8f7e5c115ef7024ca079622cef3b |
2025-07-04 18:49:24.268791 | orchestrator | | host_status | None |
2025-07-04 18:49:24.268803 | orchestrator | | id | b076f055-06ec-4cad-b71f-451db8346a35 |
2025-07-04 18:49:24.268816 | orchestrator | | image | Cirros 0.6.2 (6d28de46-ba60-48fc-bfca-6a2dd6cd9cc6) |
2025-07-04 18:49:24.268829 | orchestrator | | key_name | test |
2025-07-04 18:49:24.268842 | orchestrator | | locked | False |
2025-07-04 18:49:24.268855 | orchestrator | | locked_reason | None |
2025-07-04 18:49:24.268873 | orchestrator | | name | test-2 |
2025-07-04 18:49:24.268891 | orchestrator | | pinned_availability_zone | None |
2025-07-04 18:49:24.268903 | orchestrator | | progress | 0 |
2025-07-04 18:49:24.268914 | orchestrator | | project_id | 3cd23a63a9cb44f2a82f66de788732e0 |
2025-07-04 18:49:24.268933 | orchestrator | | properties | hostname='test-2' |
2025-07-04 18:49:24.268944 | orchestrator | | security_groups | name='icmp' |
2025-07-04 18:49:24.268955 | orchestrator | | | name='ssh' |
2025-07-04 18:49:24.268966 | orchestrator | | server_groups | None |
2025-07-04 18:49:24.268977 | orchestrator | | status | ACTIVE |
2025-07-04 18:49:24.268988 | orchestrator | | tags | test |
2025-07-04 18:49:24.268999 | orchestrator | | trusted_image_certificates | None |
2025-07-04 18:49:24.269022 | orchestrator | | updated | 2025-07-04T18:47:58Z |
2025-07-04 18:49:24.269040 | orchestrator | | user_id | 7955a41091c4452196b0831ffd4c2543 |
2025-07-04 18:49:24.269053 | orchestrator | | volumes_attached | |
2025-07-04 18:49:24.273834 | orchestrator | +-------------------------------------+------------------------------------------------------------------------------------------------------------------------------+
2025-07-04 18:49:24.576213 | orchestrator | + openstack --os-cloud test server show test-3
2025-07-04 18:49:27.938962 | orchestrator | +-------------------------------------+------------------------------------------------------------------------------------------------------------------------------+
2025-07-04 18:49:27.939073 | orchestrator | | Field | Value |
2025-07-04 18:49:27.939088 | orchestrator | +-------------------------------------+------------------------------------------------------------------------------------------------------------------------------+
2025-07-04 18:49:27.939100 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2025-07-04 18:49:27.939112 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2025-07-04 18:49:27.939123 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2025-07-04 18:49:27.939159 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 |
2025-07-04 18:49:27.939171 | orchestrator | |
OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2025-07-04 18:49:27.939198 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2025-07-04 18:49:27.939210 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2025-07-04 18:49:27.939221 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2025-07-04 18:49:27.939250 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2025-07-04 18:49:27.939262 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2025-07-04 18:49:27.939273 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2025-07-04 18:49:27.939284 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2025-07-04 18:49:27.939295 | orchestrator | | OS-EXT-STS:power_state | Running |
2025-07-04 18:49:27.939306 | orchestrator | | OS-EXT-STS:task_state | None |
2025-07-04 18:49:27.939325 | orchestrator | | OS-EXT-STS:vm_state | active |
2025-07-04 18:49:27.939336 | orchestrator | | OS-SRV-USG:launched_at | 2025-07-04T18:46:49.000000 |
2025-07-04 18:49:27.939348 | orchestrator | | OS-SRV-USG:terminated_at | None |
2025-07-04 18:49:27.939364 | orchestrator | | accessIPv4 | |
2025-07-04 18:49:27.939375 | orchestrator | | accessIPv6 | |
2025-07-04 18:49:27.939386 | orchestrator | | addresses | auto_allocated_network=10.42.0.34, 192.168.112.198 |
2025-07-04 18:49:27.939404 | orchestrator | | config_drive | |
2025-07-04 18:49:27.939416 | orchestrator | | created | 2025-07-04T18:46:32Z |
2025-07-04 18:49:27.939427 | orchestrator | | description | None |
2025-07-04 18:49:27.939439 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2025-07-04 18:49:27.939451 | orchestrator | | hostId | 42cfa82fa163d2d4d013a0e35d1d5f93c198e12f484c933aea843104 |
2025-07-04 18:49:27.939470 | orchestrator | | host_status | None |
2025-07-04 18:49:27.939483 | orchestrator | | id | b6ba8880-8743-440f-918b-5a90dc3ccaeb |
2025-07-04 18:49:27.939496 | orchestrator | | image | Cirros 0.6.2 (6d28de46-ba60-48fc-bfca-6a2dd6cd9cc6) |
2025-07-04 18:49:27.939514 | orchestrator | | key_name | test |
2025-07-04 18:49:27.939527 | orchestrator | | locked | False |
2025-07-04 18:49:27.939541 | orchestrator | | locked_reason | None |
2025-07-04 18:49:27.939554 | orchestrator | | name | test-3 |
2025-07-04 18:49:27.939573 | orchestrator | | pinned_availability_zone | None |
2025-07-04 18:49:27.939587 | orchestrator | | progress | 0 |
2025-07-04 18:49:27.939601 | orchestrator | | project_id | 3cd23a63a9cb44f2a82f66de788732e0 |
2025-07-04 18:49:27.939613 | orchestrator | | properties | hostname='test-3' |
2025-07-04 18:49:27.939633 | orchestrator | | security_groups | name='icmp' |
2025-07-04 18:49:27.939647 | orchestrator | | | name='ssh' |
2025-07-04 18:49:27.939690 | orchestrator | | server_groups | None |
2025-07-04 18:49:27.939703 | orchestrator | | status | ACTIVE |
2025-07-04 18:49:27.939736 | orchestrator | | tags | test |
2025-07-04 18:49:27.939761 | orchestrator | | trusted_image_certificates | None |
2025-07-04 18:49:27.939774 | orchestrator | | updated | 2025-07-04T18:48:03Z |
2025-07-04 18:49:27.939792 | orchestrator | | user_id | 7955a41091c4452196b0831ffd4c2543 |
2025-07-04 18:49:27.939807 | orchestrator | | volumes_attached | |
2025-07-04 18:49:27.943466 | orchestrator | +-------------------------------------+------------------------------------------------------------------------------------------------------------------------------+
2025-07-04 18:49:28.219476 | orchestrator | + openstack --os-cloud test server show test-4
2025-07-04 18:49:31.507592 | orchestrator |
+-------------------------------------+------------------------------------------------------------------------------------------------------------------------------+
2025-07-04 18:49:31.507730 | orchestrator | | Field | Value |
2025-07-04 18:49:31.507746 | orchestrator | +-------------------------------------+------------------------------------------------------------------------------------------------------------------------------+
2025-07-04 18:49:31.507758 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2025-07-04 18:49:31.507770 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2025-07-04 18:49:31.507781 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2025-07-04 18:49:31.507792 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 |
2025-07-04 18:49:31.507804 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2025-07-04 18:49:31.507815 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2025-07-04 18:49:31.507826 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2025-07-04 18:49:31.507837 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2025-07-04 18:49:31.507891 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2025-07-04 18:49:31.507905 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2025-07-04 18:49:31.507916 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2025-07-04 18:49:31.507927 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2025-07-04 18:49:31.507939 | orchestrator | | OS-EXT-STS:power_state | Running |
2025-07-04 18:49:31.507967 | orchestrator | | OS-EXT-STS:task_state | None |
2025-07-04 18:49:31.507984 | orchestrator | | OS-EXT-STS:vm_state | active |
2025-07-04 18:49:31.507995 | orchestrator | | OS-SRV-USG:launched_at | 2025-07-04T18:47:28.000000 |
2025-07-04 18:49:31.508007 | orchestrator | | OS-SRV-USG:terminated_at | None |
2025-07-04 18:49:31.508018 | orchestrator | | accessIPv4 | |
2025-07-04 18:49:31.508029 | orchestrator | | accessIPv6 | |
2025-07-04 18:49:31.508050 | orchestrator | | addresses | auto_allocated_network=10.42.0.59, 192.168.112.136 |
2025-07-04 18:49:31.508068 | orchestrator | | config_drive | |
2025-07-04 18:49:31.508080 | orchestrator | | created | 2025-07-04T18:47:11Z |
2025-07-04 18:49:31.508091 | orchestrator | | description | None |
2025-07-04 18:49:31.508102 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2025-07-04 18:49:31.508113 | orchestrator | | hostId | b8812f18e985d7a325b22b7ed8ed95800bb2088e8d8606c50408f59c |
2025-07-04 18:49:31.508125 | orchestrator | | host_status | None |
2025-07-04 18:49:31.508141 | orchestrator | | id | 7f0485ef-97ff-400f-9d5d-f4f87eb31acb |
2025-07-04 18:49:31.508153 | orchestrator | | image | Cirros 0.6.2 (6d28de46-ba60-48fc-bfca-6a2dd6cd9cc6) |
2025-07-04 18:49:31.508165 | orchestrator | | key_name | test |
2025-07-04 18:49:31.508176 | orchestrator | | locked | False |
2025-07-04 18:49:31.508194 | orchestrator | | locked_reason | None |
2025-07-04 18:49:31.508206 | orchestrator | | name | test-4 |
2025-07-04 18:49:31.508223 | orchestrator | | pinned_availability_zone | None |
2025-07-04 18:49:31.508235 | orchestrator | | progress | 0 |
2025-07-04 18:49:31.508246 | orchestrator | | project_id | 3cd23a63a9cb44f2a82f66de788732e0 |
2025-07-04 18:49:31.508258 | orchestrator | | properties | hostname='test-4' |
2025-07-04 18:49:31.508269 | orchestrator | | security_groups | name='icmp' |
2025-07-04 18:49:31.508285 | orchestrator | | | name='ssh' |
2025-07-04 18:49:31.508297 | orchestrator | | server_groups | None |
2025-07-04 18:49:31.508308 | orchestrator | | status | ACTIVE |
2025-07-04 18:49:31.508325 | orchestrator | | tags | test |
2025-07-04 18:49:31.508337 | orchestrator | | trusted_image_certificates | None |
2025-07-04 18:49:31.508348 | orchestrator | | updated | 2025-07-04T18:48:08Z |
2025-07-04 18:49:31.508364 | orchestrator | | user_id | 7955a41091c4452196b0831ffd4c2543 |
2025-07-04 18:49:31.508376 | orchestrator | | volumes_attached | |
2025-07-04 18:49:31.513686 | orchestrator | +-------------------------------------+------------------------------------------------------------------------------------------------------------------------------+
2025-07-04 18:49:31.779328 | orchestrator | + server_ping
2025-07-04 18:49:31.780869 | orchestrator | ++ tr -d '\r'
2025-07-04 18:49:31.780900 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2025-07-04 18:49:34.730171 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-07-04 18:49:34.730245 | orchestrator | + ping -c3 192.168.112.198
2025-07-04 18:49:34.743011 | orchestrator | PING 192.168.112.198 (192.168.112.198) 56(84) bytes of data.
2025-07-04 18:49:34.743091 | orchestrator | 64 bytes from 192.168.112.198: icmp_seq=1 ttl=63 time=7.50 ms
2025-07-04 18:49:35.739923 | orchestrator | 64 bytes from 192.168.112.198: icmp_seq=2 ttl=63 time=2.42 ms
2025-07-04 18:49:36.741729 | orchestrator | 64 bytes from 192.168.112.198: icmp_seq=3 ttl=63 time=1.74 ms
2025-07-04 18:49:36.741843 | orchestrator |
2025-07-04 18:49:36.741866 | orchestrator | --- 192.168.112.198 ping statistics ---
2025-07-04 18:49:36.741884 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-07-04 18:49:36.741900 | orchestrator | rtt min/avg/max/mdev = 1.743/3.886/7.497/2.568 ms
2025-07-04 18:49:36.741918 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-07-04 18:49:36.741934 | orchestrator | + ping -c3 192.168.112.123
2025-07-04 18:49:36.753412 | orchestrator | PING 192.168.112.123 (192.168.112.123) 56(84) bytes of data.
2025-07-04 18:49:36.753508 | orchestrator | 64 bytes from 192.168.112.123: icmp_seq=1 ttl=63 time=7.71 ms
2025-07-04 18:49:37.751274 | orchestrator | 64 bytes from 192.168.112.123: icmp_seq=2 ttl=63 time=3.31 ms
2025-07-04 18:49:38.751168 | orchestrator | 64 bytes from 192.168.112.123: icmp_seq=3 ttl=63 time=1.68 ms
2025-07-04 18:49:38.751240 | orchestrator |
2025-07-04 18:49:38.751247 | orchestrator | --- 192.168.112.123 ping statistics ---
2025-07-04 18:49:38.751253 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-07-04 18:49:38.751277 | orchestrator | rtt min/avg/max/mdev = 1.684/4.236/7.713/2.546 ms
2025-07-04 18:49:38.751406 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-07-04 18:49:38.751418 | orchestrator | + ping -c3 192.168.112.136
2025-07-04 18:49:38.762938 | orchestrator | PING 192.168.112.136 (192.168.112.136) 56(84) bytes of data.
2025-07-04 18:49:38.762977 | orchestrator | 64 bytes from 192.168.112.136: icmp_seq=1 ttl=63 time=7.32 ms
2025-07-04 18:49:39.759347 | orchestrator | 64 bytes from 192.168.112.136: icmp_seq=2 ttl=63 time=2.81 ms
2025-07-04 18:49:40.760451 | orchestrator | 64 bytes from 192.168.112.136: icmp_seq=3 ttl=63 time=2.04 ms
2025-07-04 18:49:40.760505 | orchestrator |
2025-07-04 18:49:40.760511 | orchestrator | --- 192.168.112.136 ping statistics ---
2025-07-04 18:49:40.760516 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2025-07-04 18:49:40.760520 | orchestrator | rtt min/avg/max/mdev = 2.041/4.056/7.320/2.328 ms
2025-07-04 18:49:40.761138 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-07-04 18:49:40.761157 | orchestrator | + ping -c3 192.168.112.101
2025-07-04 18:49:40.772010 | orchestrator | PING 192.168.112.101 (192.168.112.101) 56(84) bytes of data.
2025-07-04 18:49:40.772048 | orchestrator | 64 bytes from 192.168.112.101: icmp_seq=1 ttl=63 time=7.02 ms
2025-07-04 18:49:41.767658 | orchestrator | 64 bytes from 192.168.112.101: icmp_seq=2 ttl=63 time=2.20 ms
2025-07-04 18:49:42.769166 | orchestrator | 64 bytes from 192.168.112.101: icmp_seq=3 ttl=63 time=1.95 ms
2025-07-04 18:49:42.769261 | orchestrator |
2025-07-04 18:49:42.769276 | orchestrator | --- 192.168.112.101 ping statistics ---
2025-07-04 18:49:42.769289 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2001ms
2025-07-04 18:49:42.769301 | orchestrator | rtt min/avg/max/mdev = 1.953/3.722/7.018/2.332 ms
2025-07-04 18:49:42.769313 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-07-04 18:49:42.769325 | orchestrator | + ping -c3 192.168.112.145
2025-07-04 18:49:42.782448 | orchestrator | PING 192.168.112.145 (192.168.112.145) 56(84) bytes of data.
2025-07-04 18:49:42.782522 | orchestrator | 64 bytes from 192.168.112.145: icmp_seq=1 ttl=63 time=8.22 ms
2025-07-04 18:49:43.779096 | orchestrator | 64 bytes from 192.168.112.145: icmp_seq=2 ttl=63 time=2.96 ms
2025-07-04 18:49:44.779875 | orchestrator | 64 bytes from 192.168.112.145: icmp_seq=3 ttl=63 time=2.32 ms
2025-07-04 18:49:44.805486 | orchestrator |
2025-07-04 18:49:44.805551 | orchestrator | --- 192.168.112.145 ping statistics ---
2025-07-04 18:49:44.805566 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-07-04 18:49:44.805577 | orchestrator | rtt min/avg/max/mdev = 2.319/4.497/8.215/2.641 ms
2025-07-04 18:49:44.805602 | orchestrator | + [[ 9.1.0 == \l\a\t\e\s\t ]]
2025-07-04 18:49:45.153313 | orchestrator | ok: Runtime: 0:10:08.839132
2025-07-04 18:49:45.199845 |
2025-07-04 18:49:45.199997 | TASK [Run tempest]
2025-07-04 18:49:45.736751 | orchestrator | skipping: Conditional result was False
2025-07-04 18:49:45.755417 |
2025-07-04 18:49:45.755593 | TASK [Check prometheus alert status]
2025-07-04 18:49:46.298691 | orchestrator | skipping: Conditional result was False
2025-07-04 18:49:46.301827 |
2025-07-04 18:49:46.301991 | PLAY RECAP
2025-07-04 18:49:46.302178 | orchestrator | ok: 24 changed: 11 unreachable: 0 failed: 0 skipped: 5 rescued: 0 ignored: 0
2025-07-04 18:49:46.302245 |
2025-07-04 18:49:46.540592 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2025-07-04 18:49:46.543261 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-07-04 18:49:47.334155 |
2025-07-04 18:49:47.334322 | PLAY [Post output play]
2025-07-04 18:49:47.361344 |
2025-07-04 18:49:47.361518 | LOOP [stage-output : Register sources]
2025-07-04 18:49:47.423183 |
2025-07-04 18:49:47.423565 | TASK [stage-output : Check sudo]
2025-07-04 18:49:48.256048 | orchestrator | sudo: a password is required
2025-07-04 18:49:48.465114 | orchestrator | ok: Runtime: 0:00:00.013143
2025-07-04 18:49:48.480999 |
2025-07-04 18:49:48.481208 | LOOP [stage-output : Set source and destination for files and folders]
2025-07-04 18:49:48.518879 |
2025-07-04 18:49:48.519223 | TASK [stage-output : Build a list of source, dest dictionaries]
2025-07-04 18:49:48.599368 | orchestrator | ok
2025-07-04 18:49:48.609952 |
2025-07-04 18:49:48.610122 | LOOP [stage-output : Ensure target folders exist]
2025-07-04 18:49:49.064937 | orchestrator | ok: "docs"
2025-07-04 18:49:49.065378 |
2025-07-04 18:49:49.318857 | orchestrator | ok: "artifacts"
2025-07-04 18:49:49.579492 | orchestrator | ok: "logs"
2025-07-04 18:49:49.594726 |
2025-07-04 18:49:49.594918 | LOOP [stage-output : Copy files and folders to staging folder]
2025-07-04 18:49:49.634608 |
2025-07-04 18:49:49.634960 | TASK [stage-output : Make all log files readable]
2025-07-04 18:49:49.938907 | orchestrator | ok
2025-07-04 18:49:49.948674 |
2025-07-04 18:49:49.948851 | TASK [stage-output : Rename log files that match extensions_to_txt]
2025-07-04 18:49:49.983668 | orchestrator | skipping: Conditional result was False
2025-07-04 18:49:50.000206 |
2025-07-04 18:49:50.000422 | TASK [stage-output : Discover log files for compression]
2025-07-04 18:49:50.026815 | orchestrator | skipping: Conditional result was False
2025-07-04 18:49:50.042309 |
2025-07-04 18:49:50.042494 | LOOP [stage-output : Archive everything from logs]
2025-07-04 18:49:50.094436 |
2025-07-04 18:49:50.094622 | PLAY [Post cleanup play]
2025-07-04 18:49:50.104315 |
2025-07-04 18:49:50.104562 | TASK [Set cloud fact (Zuul deployment)]
2025-07-04 18:49:50.150387 | orchestrator | ok
2025-07-04 18:49:50.161199 |
2025-07-04 18:49:50.161315 | TASK [Set cloud fact (local deployment)]
2025-07-04 18:49:50.185015 | orchestrator | skipping: Conditional result was False
2025-07-04 18:49:50.198234 |
2025-07-04 18:49:50.198377 | TASK [Clean the cloud environment]
2025-07-04 18:49:50.870828 | orchestrator | 2025-07-04 18:49:50 - clean up servers
2025-07-04 18:49:51.610397 | orchestrator | 2025-07-04 18:49:51 - testbed-manager
2025-07-04 18:49:51.691027 | orchestrator | 2025-07-04 18:49:51 - testbed-node-4
2025-07-04 18:49:51.798921 | orchestrator | 2025-07-04 18:49:51 - testbed-node-1
2025-07-04 18:49:51.886624 | orchestrator | 2025-07-04 18:49:51 - testbed-node-5
2025-07-04 18:49:51.975158 | orchestrator | 2025-07-04 18:49:51 - testbed-node-2
2025-07-04 18:49:52.066539 | orchestrator | 2025-07-04 18:49:52 - testbed-node-0
2025-07-04 18:49:52.152393 | orchestrator | 2025-07-04 18:49:52 - testbed-node-3
2025-07-04 18:49:52.238416 | orchestrator | 2025-07-04 18:49:52 - clean up keypairs
2025-07-04 18:49:52.257135 | orchestrator | 2025-07-04 18:49:52 - testbed
2025-07-04 18:49:52.282259 | orchestrator | 2025-07-04 18:49:52 - wait for servers to be gone
2025-07-04 18:50:05.486994 | orchestrator | 2025-07-04 18:50:05 - clean up ports
2025-07-04 18:50:05.664014 | orchestrator | 2025-07-04 18:50:05 - 01c30035-3d5f-47f0-acc5-ee5151432b5c
2025-07-04 18:50:05.899044 | orchestrator | 2025-07-04 18:50:05 - 05f380af-94fa-4e6a-87a3-81808ed32199
2025-07-04 18:50:06.312490 | orchestrator | 2025-07-04 18:50:06 - 2c840f6d-edad-47cc-b989-918b13186518
2025-07-04 18:50:06.563217 | orchestrator | 2025-07-04 18:50:06 - 54db80fc-347f-4012-bb8a-5e8bbd12407a
2025-07-04 18:50:06.777246 | orchestrator | 2025-07-04 18:50:06 - 88dfcb38-f4a5-4952-b23a-0f543de5037d
2025-07-04 18:50:07.086930 | orchestrator | 2025-07-04 18:50:07 - 9fd8a993-06a0-44ea-b014-4fa07b7b6916
2025-07-04 18:50:07.297157 | orchestrator | 2025-07-04 18:50:07 - cf0213a8-ce18-4f69-b725-9970be9f78d5
2025-07-04 18:50:07.498066 | orchestrator | 2025-07-04 18:50:07 - clean up volumes
2025-07-04 18:50:07.596317 | orchestrator | 2025-07-04 18:50:07 - testbed-volume-2-node-base
2025-07-04 18:50:07.636482 | orchestrator | 2025-07-04 18:50:07 - testbed-volume-1-node-base
2025-07-04 18:50:07.676798 | orchestrator | 2025-07-04 18:50:07 - testbed-volume-5-node-base
2025-07-04 18:50:07.716391 | orchestrator | 2025-07-04 18:50:07 - testbed-volume-0-node-base
2025-07-04 18:50:07.762666 | orchestrator | 2025-07-04 18:50:07 - testbed-volume-3-node-base
2025-07-04 18:50:07.803445 | orchestrator | 2025-07-04 18:50:07 - testbed-volume-4-node-base
2025-07-04 18:50:07.842969 | orchestrator | 2025-07-04 18:50:07 - testbed-volume-manager-base
2025-07-04 18:50:07.888146 | orchestrator | 2025-07-04 18:50:07 - testbed-volume-4-node-4
2025-07-04 18:50:07.930212 | orchestrator | 2025-07-04 18:50:07 - testbed-volume-3-node-3
2025-07-04 18:50:07.972082 | orchestrator | 2025-07-04 18:50:07 - testbed-volume-5-node-5
2025-07-04 18:50:08.015266 | orchestrator | 2025-07-04 18:50:08 - testbed-volume-8-node-5
2025-07-04 18:50:08.067168 | orchestrator | 2025-07-04 18:50:08 - testbed-volume-6-node-3
2025-07-04 18:50:08.112434 | orchestrator | 2025-07-04 18:50:08 - testbed-volume-0-node-3
2025-07-04 18:50:08.159376 | orchestrator | 2025-07-04 18:50:08 - testbed-volume-1-node-4
2025-07-04 18:50:08.207318 | orchestrator | 2025-07-04 18:50:08 - testbed-volume-2-node-5
2025-07-04 18:50:08.248889 | orchestrator | 2025-07-04 18:50:08 - testbed-volume-7-node-4
2025-07-04 18:50:08.291473 | orchestrator | 2025-07-04 18:50:08 - disconnect routers
2025-07-04 18:50:08.363317 | orchestrator | 2025-07-04 18:50:08 - testbed
2025-07-04 18:50:09.756951 | orchestrator | 2025-07-04 18:50:09 - clean up subnets
2025-07-04 18:50:09.809124 | orchestrator | 2025-07-04 18:50:09 - subnet-testbed-management
2025-07-04 18:50:09.976957 | orchestrator | 2025-07-04 18:50:09 - clean up networks
2025-07-04 18:50:10.156198 | orchestrator | 2025-07-04 18:50:10 - net-testbed-management
2025-07-04 18:50:10.446569 | orchestrator | 2025-07-04 18:50:10 - clean up security groups
2025-07-04 18:50:10.484106 | orchestrator | 2025-07-04 18:50:10 - testbed-management
2025-07-04 18:50:10.598326 | orchestrator | 2025-07-04 18:50:10 - testbed-node
2025-07-04 18:50:10.702905 | orchestrator | 2025-07-04 18:50:10 - clean up floating ips
2025-07-04 18:50:10.737813 | orchestrator | 2025-07-04 18:50:10 - 81.163.192.186
2025-07-04 18:50:11.087186 | orchestrator | 2025-07-04 18:50:11 - clean up routers
2025-07-04 18:50:11.191562 | orchestrator | 2025-07-04 18:50:11 - testbed
2025-07-04 18:50:12.263988 | orchestrator | ok: Runtime: 0:00:21.507749
2025-07-04 18:50:12.267959 |
2025-07-04 18:50:12.268105 | PLAY RECAP
2025-07-04 18:50:12.268195 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2025-07-04 18:50:12.268239 |
2025-07-04 18:50:12.398528 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-07-04 18:50:12.401150 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-07-04 18:50:13.208963 |
2025-07-04 18:50:13.209151 | PLAY [Cleanup play]
2025-07-04 18:50:13.225395 |
2025-07-04 18:50:13.225535 | TASK [Set cloud fact (Zuul deployment)]
2025-07-04 18:50:13.283956 | orchestrator | ok
2025-07-04 18:50:13.293887 |
2025-07-04 18:50:13.294080 | TASK [Set cloud fact (local deployment)]
2025-07-04 18:50:13.329735 | orchestrator | skipping: Conditional result was False
2025-07-04 18:50:13.347848 |
2025-07-04 18:50:13.348013 | TASK [Clean the cloud environment]
2025-07-04 18:50:14.543650 | orchestrator | 2025-07-04 18:50:14 - clean up servers
2025-07-04 18:50:15.108803 | orchestrator | 2025-07-04 18:50:15 - clean up keypairs
2025-07-04 18:50:15.127174 | orchestrator | 2025-07-04 18:50:15 - wait for servers to be gone
2025-07-04 18:50:15.169221 | orchestrator | 2025-07-04 18:50:15 - clean up ports
2025-07-04 18:50:15.264972 | orchestrator | 2025-07-04 18:50:15 - clean up volumes
2025-07-04 18:50:15.359093 | orchestrator | 2025-07-04 18:50:15 - disconnect routers
2025-07-04 18:50:15.392827 | orchestrator | 2025-07-04 18:50:15 - clean up subnets
2025-07-04 18:50:15.412155 | orchestrator | 2025-07-04 18:50:15 - clean up networks
2025-07-04 18:50:15.533080 | orchestrator | 2025-07-04 18:50:15 - clean up security groups
2025-07-04 18:50:15.565951 | orchestrator | 2025-07-04 18:50:15 - clean up floating ips
2025-07-04 18:50:15.590275 | orchestrator | 2025-07-04 18:50:15 - clean up routers
2025-07-04 18:50:15.897149 | orchestrator | ok: Runtime: 0:00:01.444443
2025-07-04 18:50:15.899929 |
2025-07-04 18:50:15.900074 | PLAY RECAP
2025-07-04 18:50:15.900155 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2025-07-04 18:50:15.900193 |
2025-07-04 18:50:16.035481 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-07-04 18:50:16.039494 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-07-04 18:50:16.840032 |
2025-07-04 18:50:16.840252 | PLAY [Base post-fetch]
2025-07-04 18:50:16.856914 |
2025-07-04 18:50:16.857091 | TASK [fetch-output : Set log path for multiple nodes]
2025-07-04 18:50:16.933498 | orchestrator | skipping: Conditional result was False
2025-07-04 18:50:16.940953 |
2025-07-04 18:50:16.941138 | TASK [fetch-output : Set log path for single node]
2025-07-04 18:50:16.981552 | orchestrator | ok
2025-07-04 18:50:16.988352 |
2025-07-04 18:50:16.988481 | LOOP [fetch-output : Ensure local output dirs]
2025-07-04 18:50:17.558089 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/7885af844d2e46e9b44bce6c93c3bd94/work/logs"
2025-07-04 18:50:17.836515 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/7885af844d2e46e9b44bce6c93c3bd94/work/artifacts"
2025-07-04 18:50:18.108140 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/7885af844d2e46e9b44bce6c93c3bd94/work/docs"
2025-07-04 18:50:18.130414 |
2025-07-04 18:50:18.130552 | LOOP [fetch-output : Collect logs, artifacts and docs]
2025-07-04 18:50:19.104624 | orchestrator | changed: .d..t...... ./
2025-07-04 18:50:19.104956 | orchestrator | changed: All items complete
2025-07-04 18:50:19.105007 |
2025-07-04 18:50:19.911798 | orchestrator | changed: .d..t...... ./
2025-07-04 18:50:20.652252 | orchestrator | changed: .d..t...... ./
2025-07-04 18:50:20.673360 |
2025-07-04 18:50:20.673501 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2025-07-04 18:50:20.702782 | orchestrator | skipping: Conditional result was False
2025-07-04 18:50:20.706290 | orchestrator | skipping: Conditional result was False
2025-07-04 18:50:20.721911 |
2025-07-04 18:50:20.722024 | PLAY RECAP
2025-07-04 18:50:20.722125 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2025-07-04 18:50:20.722164 |
2025-07-04 18:50:20.846009 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-07-04 18:50:20.847108 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-07-04 18:50:21.704316 |
2025-07-04 18:50:21.704485 | PLAY [Base post]
2025-07-04 18:50:21.719246 |
2025-07-04 18:50:21.719393 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2025-07-04 18:50:22.979120 | orchestrator | changed
2025-07-04 18:50:22.997776 |
2025-07-04 18:50:22.998015 | PLAY RECAP
2025-07-04 18:50:22.998233 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2025-07-04 18:50:22.998413 |
2025-07-04 18:50:23.131948 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-07-04 18:50:23.133302 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2025-07-04 18:50:23.954349 |
2025-07-04 18:50:23.954519 | PLAY [Base post-logs]
2025-07-04 18:50:23.965816 |
2025-07-04 18:50:23.965951 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2025-07-04 18:50:24.428756 | localhost | changed
2025-07-04 18:50:24.443981 |
2025-07-04
18:50:24.444237 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul] 2025-07-04 18:50:24.471588 | localhost | ok 2025-07-04 18:50:24.476257 | 2025-07-04 18:50:24.476389 | TASK [Set zuul-log-path fact] 2025-07-04 18:50:24.492970 | localhost | ok 2025-07-04 18:50:24.504896 | 2025-07-04 18:50:24.505110 | TASK [set-zuul-log-path-fact : Set log path for a build] 2025-07-04 18:50:24.533383 | localhost | ok 2025-07-04 18:50:24.539863 | 2025-07-04 18:50:24.540022 | TASK [upload-logs : Create log directories] 2025-07-04 18:50:25.091951 | localhost | changed 2025-07-04 18:50:25.096874 | 2025-07-04 18:50:25.097062 | TASK [upload-logs : Ensure logs are readable before uploading] 2025-07-04 18:50:25.632515 | localhost -> localhost | ok: Runtime: 0:00:00.009840 2025-07-04 18:50:25.639918 | 2025-07-04 18:50:25.640086 | TASK [upload-logs : Upload logs to log server] 2025-07-04 18:50:26.216740 | localhost | Output suppressed because no_log was given 2025-07-04 18:50:26.221045 | 2025-07-04 18:50:26.221254 | LOOP [upload-logs : Compress console log and json output] 2025-07-04 18:50:26.272372 | localhost | skipping: Conditional result was False 2025-07-04 18:50:26.280119 | localhost | skipping: Conditional result was False 2025-07-04 18:50:26.293761 | 2025-07-04 18:50:26.294013 | LOOP [upload-logs : Upload compressed console log and json output] 2025-07-04 18:50:26.351935 | localhost | skipping: Conditional result was False 2025-07-04 18:50:26.352520 | 2025-07-04 18:50:26.356094 | localhost | skipping: Conditional result was False 2025-07-04 18:50:26.369742 | 2025-07-04 18:50:26.369968 | LOOP [upload-logs : Upload console log and json output]
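Editor's note: the "Clean the cloud environment" task in the cleanup play above deletes resources in dependency order: servers before their ports and volumes, router interfaces detached before subnets and routers are removed. A minimal sketch of that ordering follows; the `cleanup` helper and `execute` callback are hypothetical illustrations, not the actual OSISM cleanup script's API.

```python
# Teardown order observed in the "Clean the cloud environment" task log.
# Dependent resources are removed first: servers before the ports and
# volumes attached to them, router interfaces detached before the
# subnets and routers themselves are deleted.
CLEANUP_ORDER = [
    "servers",
    "keypairs",
    "wait for servers to be gone",
    "ports",
    "volumes",
    "disconnect routers",
    "subnets",
    "networks",
    "security groups",
    "floating ips",
    "routers",
]


def cleanup(execute):
    """Run the steps in order.

    `execute` is a hypothetical callback; in a real run it would call
    the OpenStack API for the given resource type.
    """
    for step in CLEANUP_ORDER:
        execute(step)


# Record the order the steps would run in:
ran = []
cleanup(ran.append)
assert ran.index("servers") < ran.index("ports")
assert ran.index("disconnect routers") < ran.index("subnets")
```

Running the steps out of order would fail: a port attached to a live server, or a subnet still wired to a router interface, cannot be deleted.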
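Editor's note: the `changed: .d..t...... ./` lines in the fetch-output loop above are rsync `--itemize-changes` output. Per the rsync manpage the format is `YXcstpoguax`: `Y` is the update type, `X` the file type, followed by nine attribute flags where a letter means that attribute changed. A small illustrative decoder (not part of the job tooling):

```python
# Decode rsync --itemize-changes strings such as ".d..t......".
UPDATE_TYPES = {
    "<": "sent", ">": "received", "c": "created",
    "h": "hardlink", ".": "no transfer", "*": "message",
}
FILE_TYPES = {
    "f": "file", "d": "directory", "L": "symlink",
    "D": "device", "S": "special",
}
# Attribute positions 3-11: checksum, size, mtime, perms, owner, group,
# reserved, ACL, xattr. "." or " " means unchanged.
ATTR_NAMES = ["checksum", "size", "mtime", "perms",
              "owner", "group", "reserved", "acl", "xattr"]


def decode_itemize(s: str):
    """Return (update type, file type, list of changed attributes)."""
    changed = [name for name, flag in zip(ATTR_NAMES, s[2:11])
               if flag not in ". "]
    return UPDATE_TYPES[s[0]], FILE_TYPES[s[1]], changed


print(decode_itemize(".d..t......"))
# → ('no transfer', 'directory', ['mtime'])
```

So `.d..t...... ./` in the log means: the top-level directory itself was not transferred, but its modification time was updated as its contents were synced.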